pith. machine review for the scientific record.

physics.comp-ph

Computational Physics

All aspects of computational science applied to physics.

physics.comp-ph 2026-05-13 Recognition

Tangent-plane uncertainty outperforms random sampling for magnetic potentials

Tangent-Plane Evidential Uncertainty in Active Learning for Magnetic Interatomic Potentials

Projecting spin-force uncertainty into the plane orthogonal to local spins creates an indicator that tracks errors and selects informative training configurations.

Magnetic interatomic potentials need to account for coupled lattice and spin degrees of freedom, yet constructing reliable training sets remains costly because noncollinear first-principles labels are expensive. Active learning can mitigate this cost, provided that the uncertainty estimate is physically meaningful for the magnetic-response targets that drive spin reorientation. Here we extend the $\mathrm{e}^2\mathrm{IP}$ evidential framework to magnetic machine-learning interatomic potentials by formulating the projected spin-force likelihood and the corresponding epistemic uncertainty in the tangent plane orthogonal to the local spin direction. This construction prevents the uncertainty model from allocating probability mass to a radial spin component that is absent from the constrained-moment supervision. Using bulk BiFeO$_3$ and monolayer CrTe$_2$ as benchmark systems, we show that the resulting tangent-plane epistemic uncertainty indicator $U_{\mathrm{epi}}^{\mathrm{sf}}$ correlates strongly with prediction error and selects more informative configurations than random sampling, simultaneously improving energy, force, and projected spin-force accuracy. These results demonstrate a physically interpretable and data-efficient route for constructing uncertainty-aware magnetic machine-learning interatomic potentials.
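The tangent-plane construction at the heart of the abstract is easy to state concretely. The sketch below (function name and array shapes are illustrative, not the paper's implementation) removes the radial component of a spin-force vector, keeping only the part in the plane orthogonal to the local spin direction:

```python
import numpy as np

def tangent_plane_projection(spin_force, spin_dir):
    """Project a spin-force vector onto the plane orthogonal to the local
    spin direction, discarding the radial component that is absent from
    constrained-moment supervision."""
    e = spin_dir / np.linalg.norm(spin_dir)   # unit spin direction
    radial = np.dot(spin_force, e) * e        # component along the spin
    return spin_force - radial                # tangent-plane component

# the projected vector is orthogonal to the spin direction by construction
f_proj = tangent_plane_projection(np.array([1.0, 2.0, 3.0]),
                                  np.array([0.0, 0.0, 2.0]))
```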
physics.comp-ph 2026-05-12 Recognition

Block-structured matmul speeds DFT integrals up to 10x on GPUs

Accelerating Locality-Driven Integration in Quantum Chemistry with Block-Structured Matrix Multiplication

KerneLDI keeps only spatially relevant blocks and multiplies them with adapted dense kernels, cutting the cost of exchange-correlation integration and ab initio molecular dynamics.

Locality-driven integration is a pervasive computational pattern in quantum chemistry, arising whenever spatially localized basis functions interact through numerical quadrature or integral screening. The dominant matrix multiplications in these tasks exhibit dynamic, structured sparsity driven by spatial locality, posing significant challenges for both dense batched kernels and generic sparse formats on GPUs. We present KerneLDI, a GPU-oriented framework that addresses this regime by co-designing data layout, screening logic, and matrix-computation operators to realize block-structured matrix multiplication for locality-driven integration. KerneLDI reorganizes operand matrices into a unified block-filtered representation that retains only spatially relevant blocks, and executes the resulting contractions with customized dense block multipliers that adapt proven dense-matmul optimizations to retained block pairs. We develop and evaluate KerneLDI on exchange--correlation (EXC) integration in Kohn--Sham density functional theory, a representative and computationally critical instance of this pattern. Across diverse molecular systems, KerneLDI preserves numerical accuracy while delivering up to 10$\times$ speedup for EXC evaluation over a dense GPU baseline, scales favorably with increasing system size and multi-GPU parallelism, accelerates end-to-end self-consistent field calculations, and yields nearly 6$\times$ throughput improvement for ab initio molecular dynamics.
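The block-filtering idea can be illustrated with a toy sketch. Here a simple magnitude test stands in for KerneLDI's spatial-locality screening, and plain dense multiplies stand in for its customized block kernels; only block pairs that survive screening are multiplied:

```python
import numpy as np

def block_sparse_matmul(A, B, bs, tol=1e-12):
    """Toy block-structured matmul: skip block pairs eliminated by a
    magnitude screen (a stand-in for spatial-locality screening) and run
    a dense kernel on each retained pair."""
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(0, A.shape[0], bs):
        for k in range(0, A.shape[1], bs):
            Ablk = A[i:i+bs, k:k+bs]
            if np.abs(Ablk).max() < tol:      # block screened out
                continue
            for j in range(0, B.shape[1], bs):
                Bblk = B[k:k+bs, j:j+bs]
                if np.abs(Bblk).max() < tol:
                    continue
                C[i:i+bs, j:j+bs] += Ablk @ Bblk   # dense kernel on retained pair
    return C
```

When the screened-out blocks are exactly zero, the result matches the full dense product while touching far fewer blocks.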
physics.comp-ph 2026-05-12 Recognition

Graph reordering cuts memory pressure in GPU integral evaluation

FusionRCG: Orchestrating Recursive Computation Graphs across GPU Memory Hierarchies

FusionRCG reorders recurrence steps and fuses transformations to reach 3.09x faster SCF runs while holding 75% efficiency at 64 GPUs.

Evaluating high-dimensional integrals via deep hierarchical recurrences is a dominant cost in quantum chemistry. While CPUs manage these efficiently, GPUs suffer a critical mismatch: limited per-thread memory is quickly overwhelmed by an explosion of simultaneously live intermediate variables. As recurrence scales, this forces massive data spilling to global memory, collapsing performance into a severe memory-bound regime. We present FusionRCG, a framework that jointly optimizes computation graph structure and GPU memory mapping. Exploiting the inherent topological flexibility of recurrence graphs, using electron repulsion integrals as an example, we contribute: (1) liveness-aware graph orchestration to minimize peak live intermediates; (2) algebraic dimensionality reduction via stepwise Cartesian-to-spherical fusion, shrinking intermediate footprints by up to $7.7\times$; and (3) an adaptive multi-tier kernel architecture routing graphs across the memory hierarchy. Evaluated on NVIDIA A100 GPUs, FusionRCG achieves up to $3.09\times$ end-to-end SCF speedup over GPU4PySCF and maintains $75\%$ parallel efficiency at 64~GPUs, successfully rescuing these workloads from memory-bound limits.
physics.comp-ph 2026-05-12 1 theorem

Neural networks prune ISAT trees to cut memory in flame simulations

Neural-ISAM: A hybrid in-situ machine learning approach for complex manifold-based combustion models in LES of turbulent flames

Hybrid method trains models on-the-fly to replace parts of adaptive tabulation for complex combustion models

Manifold-based combustion models decrease the cost of turbulent combustion simulations by projecting the thermochemical state onto a lower-dimensional manifold, allowing the thermochemical state to be computed separately from the flow solver. The solutions to the manifold equations have traditionally been precomputed and pretabulated, but this results in large memory requirements and significant precomputation cost even for simple models. One approach to alleviating the memory requirements is In-Situ Adaptive Manifolds (ISAM), which only stores solutions that are encountered during a simulation in a database built with In-Situ Adaptive Tabulation (ISAT). Even with ISAM, as the manifold complexity increases, the memory requirements can still grow too large. Another approach to reducing the memory of these databases is machine learning, since learned models represent functions in a highly memory-compact manner. However, current implementations of these methods require the pregeneration of training datasets with little knowledge of the states present in a simulation. This work develops the Neural In-Situ Adaptive Manifolds (Neural-ISAM) method, which is designed to address the drawbacks of both adaptive tabulation and machine learning methods and to leverage their benefits by coupling neural networks to manifold databases on-the-fly. ISAM databases are built via ISAT, which stores the manifold solutions in a binary tree, and Neural-ISAM periodically searches this tree to identify regions that can be pruned. Neural networks are trained on the candidate regions, and these portions of the binary tree are then replaced by the trained neural network, reducing the memory requirements of the database. Neural-ISAM memory usage, computational performance, and accuracy are evaluated in LES of two turbulent flames with increasing manifold model complexity: Sandia Flame D and the Sandia sooting flame.
physics.comp-ph 2026-05-11 2 theorems

Constitutive priors enable inverse design of elastic networks

Constitutive Priors for Inverse Design

A latent manifold of valid material laws learned from noisy data turns nonconvex optimization into a tractable process with geometry matching and manufacturing constraints.

This work introduces an end-to-end framework for inverse design of elastic networks directly in the space of constitutive behaviors. A constitutive prior is constructed from noisy stress-strain data using a latent representation that defines a manifold of admissible material laws while enforcing thermodynamic consistency. The inverse problem is formulated as a PDE-constrained optimization problem over latent constitutive variables that parameterize spatially varying material behavior. To improve robustness in the resulting nonconvex optimization, a homotopy-based continuation strategy is introduced using intermediate target point clouds generated through affine registration. Geometry matching is performed using the Chamfer distance, enabling optimization without requiring mesh correspondence between the target and reference configurations. To account for manufacturing constraints limiting abrupt spatial variation in material properties, the framework additionally incorporates a neural-network-based smoothness prior together with a graph-based smoothness metric. The proposed approach is demonstrated on several inverse design problems for elastic networks and compared against alternative optimization strategies.
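The Chamfer distance used for correspondence-free geometry matching is compact to write down. A minimal dense sketch (fine for small clouds; practical implementations would use k-d trees for the nearest-neighbor queries):

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (n, d) and Q (m, d):
    mean squared distance from each point to its nearest neighbor in the
    other cloud. No point correspondence between the clouds is needed."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```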
physics.comp-ph 2026-05-11 2 theorems

Neural nets recover GENERIC equations with non-quadratic dissipation

Nonlinear GENERIC Informed Neural Networks (N-GINNs): learning GENERIC dynamics with non-quadratic dissipation potentials

Reparameterized networks enforce exact energy conservation and entropy production for a wider class of dissipative systems

We introduce Nonlinear GENERIC Informed Neural Networks (N-GINNs), a deep learning framework for discovering evolution equations of systems governed by the nonlinear GENERIC formalism (General Equation for Non-Equilibrium Reversible-Irreversible Coupling). Such systems exhibit coupled conservative and dissipative dynamics, and can be described via the superposition of a Hamiltonian flow and a generalized gradient flow. In contrast to existing approaches, our formulation incorporates generalized gradient flows via convex dissipation potentials, enabling the identification of a broader class of thermodynamically consistent dynamics, including systems with non-quadratic dissipation potentials. Thermodynamic structure is strongly enforced by construction through suitable reparameterizations of both the bivector operator and the dissipation potential, ensuring exact compliance with the first and second laws of thermodynamics. We validate the proposed approach on three representative examples: a harmonic oscillator coupled to a heat bath, an idealized chemical motor, and a one-dimensional viscoplastic model of Perzyna type. These results demonstrate the method's ability to accurately infer thermodynamically consistent models from data for systems incorporating both conservative and nonlinear dissipative dynamics.
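For orientation, the nonlinear GENERIC structure the abstract refers to can be sketched as follows (generic textbook notation, not necessarily the paper's):

```latex
\dot{z} \;=\; L(z)\,\nabla E(z)
  \;+\; \frac{\partial \Psi^{*}}{\partial \xi}\big(z,\xi\big)\Big|_{\xi=\nabla S(z)},
\qquad L = -L^{\mathsf T},\quad L(z)\,\nabla S(z)=0,
```

where $\Psi^{*}(z,\cdot)$ is convex with minimum $0$ at $\xi=0$. Antisymmetry of $L$ and the degeneracy $\nabla E\cdot\partial_{\xi}\Psi^{*}(z,\nabla S)=0$ give $\dot E = 0$ (first law), while convexity gives $\dot S = \nabla S\cdot\partial_{\xi}\Psi^{*}(z,\nabla S)\ge 0$ (second law). The quadratic choice $\Psi^{*}(z,\xi)=\tfrac12\langle\xi, M(z)\xi\rangle$ recovers the classical GENERIC form $\dot z = L\nabla E + M\nabla S$; the paper's point is precisely that non-quadratic $\Psi^{*}$ are also admissible.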
physics.comp-ph 2026-05-11 3 theorems

Time-domain MAS models metasurface transients via GSTC

A Time-Domain Method of Auxiliary Sources for Analyzing Transient Electromagnetic Interactions with GSTC-Modeled Metasurfaces

Converting impedance conditions to causal convolutions lets auxiliary sources compute pulsed responses directly in time.

This paper presents a time-domain (TD) formulation for modeling the transient electromagnetic response of two-dimensional (2D) metasurfaces using the Method of Auxiliary Sources (MAS) combined with the Generalized Sheet Transition Condition (GSTC). In the proposed approach, the frequency-domain impedance-type GSTC is transformed into a causal, convolution-based TD representation and integrated within the MAS formulation.
physics.comp-ph 2026-05-11 Recognition

Sparse sampling cuts 3D hyperelastic RVE training cost by 1000x

Physics-Informed Reduced-Order Operator Learning for Hyperelasticity in Continuum Micromechanics

EquiNO with Q-DEIM trains on few loading paths and recovers homogenized stresses three to four orders of magnitude faster than full-field RVE computations.

Physics-informed operator learning is an attractive candidate for surrogate modeling of microstructures, especially in multiscale finite-element simulations. Its practical use, however, is often limited by the high cost of loss evaluation. We address this bottleneck by combining the Equilibrium Neural Operator (EquiNO) with the QR-based discrete empirical interpolation method (Q-DEIM). EquiNO learns only the modal coefficients of reduced displacement-fluctuation and first Piola-Kirchhoff stress representations built from periodic and divergence-free bases, thereby enforcing periodicity and mechanical equilibrium by construction. Q-DEIM then identifies a small set of spatial points through a column-pivoted QR factorization of the stress basis and restricts constitutive evaluations during training to these points alone. This makes full-batch second-order optimization practical for three-dimensional representative volume elements (RVEs). Homogenized first Piola-Kirchhoff stresses are recovered directly from the offline-averaged reduced stress modes, without the need to reconstruct the full stress field at inference time. We validate the framework on two three-dimensional finite-strain hyperelastic RVEs. Q-DEIM reduces the per-step training cost by roughly three orders of magnitude relative to full-field loss evaluation, while reduced homogenization achieves speed-up factors of order $10^3$ to $10^4$ over direct full-field computations. Despite relying on only a small number of offline snapshot loading paths for basis construction, the method accurately interpolates and extrapolates both microscopic stress fields and homogenized stresses, with prediction quality improving systematically as more snapshots are added.
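Point selection of this kind is compact to sketch. The snippet below implements the classical greedy DEIM selection over an orthonormal basis (the paper uses the QR-pivoting Q-DEIM variant, which plays the same role of picking a few well-conditioned sampling rows):

```python
import numpy as np

def deim_points(U):
    """Greedy DEIM point selection from an orthonormal basis U (n, k):
    iteratively pick the row where the current basis vector is worst
    represented by interpolation at the points chosen so far."""
    n, k = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])  # interpolation coeffs
        r = U[:, j] - U[:, :j] @ c                            # residual everywhere
        p.append(int(np.argmax(np.abs(r))))                   # worst-resolved point
    return np.array(p)
```

Constitutive evaluations during training can then be restricted to the rows returned by `deim_points`, which is what makes full-batch second-order optimization affordable.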
physics.comp-ph 2026-05-11 Recognition

AMR runs efficiently on GPUs with small grid blocks

foap4: Adaptive mesh refinement with OpenACC, MPI, and p4est

A Fortran framework with OpenACC and p4est shows practical GPU acceleration for 2D and 3D adaptive mesh refinement.

GPUs and other accelerators are increasingly used for scientific computing. In the future, we want to add GPU support to parallel adaptive mesh refinement (AMR) codes written in Fortran. To understand which changes are necessary to obtain good performance we have developed foap4, an AMR framework implemented in Fortran that uses OpenACC, MPI, and the p4est library. We discuss the design and implementation of the framework. Several benchmark problems are considered, in which Euler's equations of gas dynamics are solved using explicit time integration. These benchmarks are performed in both 2D and 3D, using static and adaptive meshes, for varying problem sizes on different hardware. Our results show that AMR simulations can be carried out efficiently on GPUs with OpenACC and MPI, even when using relatively small grid blocks of $8^3$ or $16^3$ cells.
physics.comp-ph 2026-05-08

Library keeps Wigner symbols exact until final float conversion

libwignernj: a reusable C/C++/Fortran/Python library for exact Wigner symbols and related coefficients

Prime-exponent factorials and multiword Racah sums confine rounding to the output step, delivering bit-correct results in single, double, or long-double precision.

We describe libwignernj, a freely available, BSD-licensed library that evaluates Wigner 3j, 6j, and 9j symbols, Clebsch--Gordan, Racah $W$, and Fano $X$ coefficients, and Gaunt coefficients over both complex and real spherical harmonics in standards-compliant C99. libwignernj represents factorials by the vector of their signed prime-exponent decomposition - a prime-factorization technique introduced for the angular-momentum coefficients by Dodds and Wiechers (Comput. Phys. Commun. 4, 268 (1972)) and refined in a long line of subsequent work - and combines that representation with the multiword-integer Racah sum of Johansson and Forss\'en (SIAM J. Sci. Comput. 38, A376 (2016)), under which every intermediate quantity is an exact rational and all rounding is confined to the final floating-point conversion. Single-, double-, and long-double-precision results are correct to the last representable bit, and IEEE 754 binary128 evaluation through libquadmath and arbitrary-precision evaluation through the GNU Multiple-Precision Floating-Point Reliable (MPFR) library are optionally exposed. libwignernj has no mandatory runtime dependencies and no caller-side initialization step, making it easy to embed across the atomic, molecular, nuclear, and electromagnetic-scattering applications in which these coefficients arise. C++, CPython, and Fortran 90 bindings ship alongside the C library. Half-integer angular momenta are encoded exactly via integer $2j$ arguments throughout the application programming interface (API). CMake-package and pkg-config files ship for drop-in integration into downstream projects, and a continuous-integration (CI) pipeline runs the full test suite on Linux (shared and static), macOS, and Windows on every push.
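The prime-exponent representation is easy to illustrate: with Legendre's formula, products and quotients of factorials become exact integer arithmetic on exponent vectors, and rounding can only happen when the final value is assembled. A toy sketch (not the library's API):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def factorial_exponents(n, primes):
    """Legendre's formula: v_p(n!) = sum_{i>=1} floor(n / p^i)."""
    exps = []
    for p in primes:
        e, q = 0, p
        while q <= n:
            e += n // q
            q *= p
        exps.append(e)
    return exps

# binomial(10, 4) = 10! / (4! 6!) as exact exponent subtraction
ps = primes_up_to(10)
diff = [a - b - c for a, b, c in zip(factorial_exponents(10, ps),
                                     factorial_exponents(4, ps),
                                     factorial_exponents(6, ps))]
value = 1
for p, e in zip(ps, diff):
    value *= p ** e   # only this final assembly step could round (here: exact int)
```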
physics.comp-ph 2026-05-08 Recognition

Koopman-DMD recovers band dispersion and quantum geometry

Data-driven reconstruction of band dispersion and quantum geometry via Koopman dynamical mode decomposition

Modes extracted from wave data map to Floquet-Bloch states, yielding spectral functions, localization measures, and Berry curvature without an explicit Hamiltonian.

We present a data-driven framework for reconstructing band structures using Koopman operator analysis and dynamic mode decomposition (Koopman-DMD). Instead of deriving spectra from an explicit Hamiltonian, the approach reconstructs band dispersion and modal dynamics directly from spatiotemporal data, including wavefunctions and observables. This framework establishes a correspondence between Hamiltonian Floquet-Bloch decomposition and Koopman-DMD, whereby the extracted DMD modes encode frequencies, decay or growth rates, spatial profiles and projection weights. These quantities allow the reconstruction of spectral functions, local density of states, and delocalized-to-localized measures such as the inverse participation ratio. Also, these extended DMD modes enable inference of quantum-geometric and topological properties, including the quantum metric, Berry curvature and geometric phases. Applications to prototypical one- and two-dimensional tight-binding models, including disordered Su-Schrieffer-Heeger model and its Floquet and non-Hermitian variants, graphene and Haldane models, demonstrate that Koopman-DMD provides a unified route for the data-driven analysis of wave propagation, localization, and topological phases in condensed matter, photonics, and related fields.
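The DMD computation underlying the framework is short. A minimal exact-DMD sketch (variable names illustrative) that recovers continuous-time mode frequencies and spatial profiles from snapshot data:

```python
import numpy as np

def dmd(X, dt, r):
    """Exact DMD: fit a linear operator with X[:, 1:] ≈ A X[:, :-1] via a
    rank-r SVD, returning continuous-time eigenvalues (frequencies and
    decay/growth rates) and the corresponding spatial modes."""
    X0, X1 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X1 @ Vh.conj().T / s   # projected operator
    lam, W = np.linalg.eig(Atilde)
    modes = X1 @ Vh.conj().T / s @ W             # exact DMD modes
    omega = np.log(lam) / dt                     # continuous-time eigenvalues
    return omega, modes
```

On data built from a few Floquet-Bloch-like exponentials, the imaginary parts of `omega` recover the mode frequencies and the columns of `modes` recover the spatial profiles.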
physics.comp-ph 2026-05-07

Poromechanical model forecasts rat glioma growth from MRI scans

An MRI-informed poromechanical model for organ-scale prediction of glioma growth

Serial imaging sets tissue stiffness and fluid flow, yielding volume predictions with 5-36 percent error and Dice overlap above 0.75.

Gliomas constitute one of the most aggressive and heterogeneous forms of brain tumors, posing major challenges for understanding their biology and developing effective treatments. Animal models enable the collection of rich longitudinal datasets describing tumor dynamics, which can be integrated within mathematical models to elucidate the biological mechanisms governing tumor growth. While most formulations rely on reaction-diffusion systems with limited insight on tissue deformation and fluid transport, we propose a magnetic resonance imaging (MRI)-informed, poroelastic model to describe C6 glioma growth in rats. We use data from animals (n=4) that were imaged five times after intracranial injection of cancer cells. Each MRI dataset includes (i) anatomical T1-weighted data for brain and tumor segmentation and to assign mechanical properties; (ii) diffusion-weighted MRI, which enables estimation of the fraction of each voxel that is tumor; and (iii) dynamic contrast-enhanced MRI, which informs permeability as well as vascular and liquid fraction maps. Using finite-element simulations, model calibration for each rat uses the Levenberg-Marquardt method informed by the first three MRI datasets. Tumor forecasts are validated by assessing model-data agreement on the remaining two MRI datasets. Our results show relative tumor volume errors between 0.94 percent and 11.27 percent at calibration, and prediction errors between 4.73 percent and 36.03 percent. Additionally, Dice scores ranged from 0.80 to 0.93 during calibration, and from 0.75 to 0.93 during validation. Thus, our results suggest that our poromechanical model can describe C6 glioma growth. This study provides a first step toward a patient-specific, multiscale model of the spatiotemporal poromechanics underlying glioma progression and therapeutic response.
physics.comp-ph 2026-05-07

New parallel code solves large fermionic eigenvalue problems competitively

CDFCI: High-Performance Parallel Software for Many-Body Large-Scale Eigenvalue Problems

CDFCI pairs coordinate-descent SCI with shared-memory parallelism to match CIPSI, SHCI, and DMRG accuracy on multi-core hardware.

CDFCI is a shared-memory parallel numerical program for computing low-lying eigenpairs of large-scale, non-relativistic fermionic Hamiltonians. The software is designed to handle a broad class of many-body quantum models, including both ab initio electronic structure Hamiltonians and lattice-based Hamiltonians arising in condensed matter physics. CDFCI combines an efficient coordinate-descent-based selected configuration interaction algorithm with dedicated parallelization strategies, achieving high performance on modern multi-core architectures. Benchmark results on representative quantum chemistry and condensed matter test cases demonstrate that CDFCI attains state-of-the-art accuracy with competitive performance compared to established selected configuration interaction (such as CIPSI or SHCI) and DMRG implementations. The software is open-source, extensively documented, and provides a Python interface for seamless integration with PySCF and other many-body simulation workflows.
physics.comp-ph 2026-05-06

GPU code speeds moving-boundary fluid simulations 20X

GPU-Accelerated Simulations of Problems with Moving Boundaries and Fluid-Structure Interaction at Extreme Scales

Sharp-interface immersed boundary method on GPUs reaches billion-point grids with over 90 percent scaling for FSI and turbulence.

Computational fluid dynamics and fluid-structure interaction simulations involving moving and deforming bodies are extremely challenging. In this work, we present a graphics processing unit (GPU) optimized implementation of the sharp-interface immersed boundary method. The method allows simulations around complex stationary as well as moving bodies on a Cartesian grid. We base our implementation on the ViCar3D framework and make use of OpenACC, CUDA, NCCL, and MPI. We test the implementation across grid sizes ranging from O(10 million) to O(1 billion) points and achieve a 20X speedup compared to the existing CPU implementation. We next present our multi-GPU implementation, which utilizes CUDA streams and NCCL communicators and obtains >90% strong and weak scaling efficiencies. Finally, we demonstrate the capability of the developed software to simulate turbulent fluid flow and coupled fluid-structure interaction of a flapping bat wing in flight at Re=5000.
physics.comp-ph 2026-05-05

Multi-fidelity models recover accurate composite predictions from sparse data

Multi-fidelity surrogates for mechanics of composites: from co-kriging to multi-fidelity neural networks

By fusing low-cost simulations with limited high-accuracy data, these methods support efficient design and optimization of complex materials

Composite materials exhibit strongly hierarchical and anisotropic properties governed by coupled mechanisms spanning constituents, plies, laminates, structures, and manufacturing history. This intrinsic complexity makes predictive modeling of composites expensive, because repeated experiments and high-fidelity simulations are needed to cover large design spaces of material, structure, and manufacturing. Multi-fidelity surrogate modeling addresses this challenge by combining abundant, less expensive data with limited high-accuracy data to recover reliable high-fidelity predictions. This review presents a structured overview of multi-fidelity modeling for composite mechanics, covering Gaussian-process or Kriging-based methods, including co-Kriging, coregionalization models, autoregressive formulations, nonlinear autoregressive Gaussian processes, multi-fidelity deep Gaussian processes, and multi-fidelity neural networks. Their distinctions are examined in terms of cross-fidelity correlation, discrepancy representation, uncertainty quantification, and scalability. Selected examples of their applications to composites are introduced according to the roles that multi-fidelity surrogates play in engineering problems, including forward prediction for rapid exploration of material design spaces, inverse optimization for composite parameter identification and design search under limited high-fidelity access, and workflow integration, where heterogeneous data sources, constraints, and validation requirements determine model utility. Open question discussions highlight recurring challenges specific to composites, such as regime-dependent fidelity gaps associated with nonlinear damage and manufacturing history, mismatches between simulations and experiments, and uncertainty propagation across multi-fidelity models.
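As a concrete flavor of the autoregressive idea behind co-kriging, here is a toy two-fidelity surrogate: a cheap model is fit on abundant low-fidelity data, then a scale factor and a low-order discrepancy are fit on the few high-fidelity points, so that f_hi(x) ≈ rho·f_lo(x) + delta(x). Polynomial models are used purely for illustration; the review's methods use Gaussian processes and neural networks in place of each piece:

```python
import numpy as np

def fit_multifidelity(x_lo, y_lo, x_hi, y_hi, deg=3):
    """Toy two-fidelity surrogate in the autoregressive spirit of co-kriging."""
    lo = np.poly1d(np.polyfit(x_lo, y_lo, deg))          # low-fidelity surrogate
    # least squares for [rho, linear-discrepancy coeffs] on high-fidelity data
    A = np.column_stack([lo(x_hi), np.vander(x_hi, 2)])  # rho*f_lo + (a*x + b)
    coef, *_ = np.linalg.lstsq(A, y_hi, rcond=None)
    return lambda x: coef[0] * lo(x) + np.vander(np.atleast_1d(x), 2) @ coef[1:]
```

With, say, 50 cheap samples of a biased, damped response and only 5 accurate samples, the fused model tracks the high-fidelity truth far better than either data source alone.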
physics.comp-ph 2026-05-04

Diffusion maps separate Bose-Hubbard phases from raw snapshots

Unsupervised Learning of Quantum Phase Transitions for Bose-Hubbard lattice systems

Unsupervised method detects transitions including topological order and many-body localization without order parameters.

Characterizing quantum many-body phase structure is a major goal for quantum simulation. Here, we employ an unsupervised learning approach based on diffusion maps to learn phase transitions in bosonic lattice systems described by Bose-Hubbard type models, which can be realized in ultracold atoms and related quantum simulation platforms. We demonstrate that this approach identifies phase structure across distinct settings without prior knowledge of order parameters or handcrafted observables, including ground-state transitions involving symmetry-protected topological phases and nonequilibrium regimes distinguishing ergodic and many-body localized behavior. Our results indicate that the approach has the potential for direct application to experimentally accessible measurement data for learning quantum phases in current quantum simulators.
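A diffusion map itself is only a few lines: a Gaussian kernel on the raw samples, row-normalized into a Markov transition matrix, embedded by its leading nontrivial eigenvectors. A minimal sketch (snapshot featurization and bandwidth choice are where the real work lies):

```python
import numpy as np

def diffusion_map(X, eps, k=2):
    """Minimal diffusion map for samples X (n, d): Gaussian affinities,
    Markov row normalization, then the k leading nontrivial eigenvectors
    as embedding coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # Markov transition matrix
    lam, V = np.linalg.eig(P)
    idx = np.argsort(-lam.real)
    return lam.real[idx[1:k + 1]], V.real[:, idx[1:k + 1]]  # skip trivial eigvec
```

On data drawn from two well-separated regimes, the first nontrivial coordinate splits the samples into the two groups without any label or order parameter, which is the mechanism the abstract exploits for phase identification.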
physics.comp-ph 2026-05-04

Tool derives nuclear charge radii from muonic X-ray data

MuDirac 1.3.0: A Sustainable Software Tool for Calculating Ground State Nuclear Properties Using Muonic X-Ray Measurements

MuDirac 1.3.0 applies a two-parameter Fermi model to transition energies for efficient extraction of ground-state nuclear properties.

The nuclear charge radius is one of the most fundamental quantities of the atomic nucleus. It can be deduced from a combination of experimental measurements of muonic X-ray transition energies with modelling of those X-ray transition energies. In this work we present MuDirac (1.3.0), which is an open, publicly available, sustainable, and computationally efficient software tool that will be put at the disposal of the negative muon community. With MuDirac (1.3.0), the community will be able to accurately and efficiently estimate nuclear properties, such as the nuclear charge radius, by assuming a 2-parameter Fermi distribution of the nuclear charge.
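For reference, the two-parameter Fermi charge distribution assumed by the tool has the standard textbook form (stated here for orientation, not quoted from the paper):

```latex
\rho(r) \;=\; \frac{\rho_0}{1 + \exp\!\big((r - c)/a\big)},
\qquad
4\pi \int_0^\infty \rho(r)\, r^2 \,\mathrm{d}r \;=\; Z e,
```

where $c$ is the half-density radius, $a$ the surface diffuseness, and $\rho_0$ is fixed by normalization to the total nuclear charge $Ze$; the root-mean-square charge radius then follows from $\langle r^2 \rangle = \int \rho\, r^4\,\mathrm{d}r \big/ \int \rho\, r^2\,\mathrm{d}r$.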
physics.comp-ph 2026-05-01

Monte Carlo computes frequency- and time-domain Jacobians

Computation of frequency- and time-domain Jacobians in optical tomography with Monte Carlo simulations

The framework supplies accurate sensitivity profiles for optical tomography where diffusion approximations fail in low-scattering tissue.

Significance: Jacobians, or spatially resolved sensitivity profiles, are central to image reconstruction in model-based optical tomography of biological tissue. Although Monte Carlo (MC) simulations are the gold standard for modeling light transport in turbid media, methodology for frequency- and time-domain Jacobians remains incomplete. Aim: This work extends MC to directly compute absorption and scattering Jacobians for frequency-domain (amplitude and phase) and time-domain (intensity and mean time-of-flight) measurements and prism-terminated optical fiber detectors. Approach: Jacobians are derived in the perturbation MC framework and implemented in the high-performance, open-source Monte Carlo eXtreme (MCX) simulator. Results are validated against the diffusion approximation (DA) solved using the finite element method in neonatal head models. MC with split voxels on curved surfaces is extended to Jacobian computation. The detector model is implemented in post-processing and compared with isotropic reception at surface. Results: MC- and DA-derived Jacobians show excellent agreement only in high-scattering regimes, highlighting the importance of MC for low-scattering domains. The detector model reduces surface sensitivity and marginally increases sensitivity to deeper tissues at short (< 2 cm) source-detector separations. Conclusion: A complete theoretical framework and MC software for computing frequency- and time-domain Jacobians is provided. Realistic detector modeling is encouraged for short-separation channels.
physics.comp-ph 2026-05-01

Kolmogorov-Sinai entropy ranks observables for chaotic reconstruction

Kolmogorov-Sinai entropies identify optimal observables for prediction and dynamics reconstruction in chaotic systems

Lower-entropy measurements yield lower error when delay coordinates recover the attractor and its evolution in ergodic systems.

Figure from the paper
abstract
Choosing the optimal observable to model dynamical systems for which we do not know the driving equations is nearly always an ad hoc art. Takens' Delay Embedding Theorem guarantees a diffeomorphism between delay-coordinate vectors built from generic scalar observables and the underlying invariant attractor, but is agnostic to optimal observable choice, and formal bounds on reconstruction quality across observables are not known. Here we prove that, under modest technical conditions, the Kolmogorov-Sinai entropy of an observable predicts its reconstruction error of the underlying dynamics in chaotic, ergodic systems. Using the Oseledets Multiplicative Ergodic Theorem, we show that the tangent bundles of reconstructed manifolds admit an invariant Oseledets filtration diffeomorphically related across admissible observables, with Lyapunov exponents controlling the propagation of perturbations. We bound reconstruction error by a quantity monotonically related to the sum of positive Lyapunov exponents and, by the Ruelle inequality, the Kolmogorov-Sinai entropy. We validate this empirically on the Lorenz-63 attractor, the Hastings-Powell food chain, and a tetracosane molecular-dynamics trajectory, recovering Spearman rank correlations between $h^{KS,UB}$ and reconstruction RMSE up to $\rho=+0.89$ ($p=5.5\times 10^{-8}$) for the realistic tetracosane case, sharpening to $\rho=+0.97$ under added measurement noise. This provides a rigorous foundation for observable selection in chaotic systems, applicable as an a priori data-selection criterion for any data-driven modeling pipeline.
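A minimal sketch of the pipeline the theorem addresses: delay-coordinate reconstruction of a scalar observable, followed by a simple nearest-neighbor (analog) forecast whose RMSE is the reconstruction-error side of the correlation. The logistic map stands in for the paper's test systems, and no entropy bound is computed here.

```python
import numpy as np

# Delay-coordinate reconstruction (Takens) plus an analog one-step forecast.
# System, dimension, and delay are illustrative choices, not the paper's.
def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# chaotic logistic-map trajectory as a stand-in observable
x = np.empty(5000); x[0] = 0.4
for t in range(4999):
    x[t + 1] = 3.99 * x[t] * (1.0 - x[t])

E = delay_embed(x, dim=3, tau=1)              # reconstructed state vectors
train, test = E[:4000], E[4000:-1]
truth = x[4003 : 4003 + len(test)]            # next scalar after each test vector

# analog forecast: predict with the successor of the nearest training vector
preds = []
for v in test:
    j = np.argmin(np.sum((train - v) ** 2, axis=1))
    preds.append(x[j + 3])                    # successor of training vector j
rmse = np.sqrt(np.mean((np.array(preds) - truth) ** 2))
print(rmse)
```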
physics.comp-ph 2026-04-29

Mixture-of-experts potentials run atomistic simulations 2x faster

Mixture of Experts Framework in Machine Learning Interatomic Potentials for Atomistic Simulations

Co-training enforces bulk consistency between high- and low-capacity models while preserving exact energy conservation and a matched bulk mechanical response.

Figure from the paper
abstract
First-principles atomistic simulations are essential for understanding complex material phenomena but are fundamentally limited by their computational cost. While Machine Learning Interatomic Potentials (MLIPs) have drastically improved cost for a given accuracy, their inference cost remains a bottleneck for massive systems or long timescales. To address this, we introduce a multifidelity "Mixture-of-Experts" framework based on the E(3)-equivariant Allegro architecture. Our method spatially partitions the simulation domain into a chemically complex region (e.g., reactive interfaces) and a simple region (e.g., bulk lattice), assigning models of varying capacity to each. Among the challenges in such static domain decomposition, the mechanical mismatch between models at the interface is particularly critical, as it can generate artificial stress fields and instability. We address this challenge with a co-training strategy in which the loss function includes agreement constraints -- penalties on per-atom energy and force discrepancies between models evaluated on shared bulk environments -- forcing the independent models to learn a consistent physical description of the bulk material. We validate this approach on a realistic Pt+CO catalytic system, demonstrating that the co-trained models maintain exact energy conservation, align their bulk mechanical response (e.g., equation of state and bulk modulus), and achieve predictive accuracy comparable to a full high-fidelity simulation at more than twice the computational speed.
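The agreement constraint described above can be written down directly; the sketch below (NumPy, with random stand-ins for the two models' predictions) shows a per-atom energy and force discrepancy penalty on shared bulk environments, not the paper's actual loss weights or Allegro outputs.

```python
import numpy as np

# Co-training idea: besides the usual fit to reference data, penalize
# per-atom energy and force disagreement between a high-capacity and a
# low-capacity model evaluated on shared bulk environments.
rng = np.random.default_rng(0)
n_bulk = 64
e_hi = rng.normal(size=n_bulk)                 # per-atom energies, high-capacity model
e_lo = e_hi + 0.01 * rng.normal(size=n_bulk)   # low-capacity model, nearly consistent
f_hi = rng.normal(size=(n_bulk, 3))            # per-atom forces
f_lo = f_hi + 0.01 * rng.normal(size=(n_bulk, 3))

def agreement_penalty(e_a, e_b, f_a, f_b, w_e=1.0, w_f=1.0):
    """Mean-squared per-atom energy/force discrepancy on shared bulk atoms."""
    return (w_e * np.mean((e_a - e_b) ** 2)
            + w_f * np.mean(np.sum((f_a - f_b) ** 2, axis=1)))

penalty = agreement_penalty(e_hi, e_lo, f_hi, f_lo)
print(penalty)
```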
physics.comp-ph 2026-04-29

GPU methods speed finite-element PAW DFT by 8-20x

Accelerating finite-element-based projector augmented-wave density functional theory calculations with scalable GPU-centric computational methods

Mixed-precision and approximate-inverse techniques keep chemical accuracy while scaling to 130,000-electron systems.

Figure from the paper
abstract
Accurate large-scale Kohn-Sham density functional theory (DFT) calculations are essential for modeling complex material systems, including interfaces, defects, nanoclusters, and twisted two-dimensional heterostructures. Achieving chemical accuracy at scales of $10^4$-$10^5$ electrons with practical time-to-solution, however, remains challenging for existing DFT implementations. We present GPU-centric computational methods and algorithmic innovations within a finite-element (FE) discretized projector augmented-wave (PAW) formulation (PAW-FE) for accurate, efficient, and scalable electronic-structure calculations on modern exascale systems. The FE discretization, developed within a collinear spin formalism, accommodates generic boundary conditions and employs multi-resolution quadrature for accurate evaluation of atom-centered PAW integrals on coarse grids. The resulting generalized Hermitian eigenproblem is solved using residual-based Chebyshev filtered subspace iteration (R-ChFSI). Exploiting R-ChFSI's tolerance to inexact matrix-multivector products, we employ an approximate inverse PAW overlap matrix, mixed-precision arithmetic (FP32/TF32), and low-precision nearest-neighbor communication (BF16) during filtered subspace construction, along with block-wise computation-communication overlap to reduce cost while preserving robustness. These strategies yield up to $8\times$ and $20\times$ CPU-GPU speedups on Intel and AMD GPU architectures, respectively. Compared to plane-wave PAW methods, PAW-FE achieves close to 8$\times$ reduction in time-to-solution for 10,000-electron systems on NVIDIA GPUs, with larger gains at scale, and around 6$\times$ over norm-conserving FE approaches. We demonstrate scalability to 130,000-electron systems, establishing PAW-FE as an exascale-ready method for chemically accurate first-principles simulations.
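The core of Chebyshev filtered subspace iteration is easy to illustrate: a Chebyshev polynomial of the shifted, scaled operator damps an unwanted spectral interval and amplifies the wanted low end. The sketch below applies one plain filter pass to a toy diagonal Hamiltonian; the paper's residual formulation (R-ChFSI), the PAW generalized eigenproblem, and mixed-precision products are all omitted.

```python
import numpy as np

# Plain Chebyshev subspace filter: p_m maps the unwanted interval [a, b]
# into [-1, 1] (where |T_m| <= 1) and grows rapidly below a.
rng = np.random.default_rng(1)
n, k, m = 200, 5, 30
A = np.diag(np.linspace(0.0, 10.0, n))         # toy Hamiltonian (diagonal)
a, b = 1.0, 10.0                               # spectral interval to damp

def chebyshev_filter(A, X, m, a, b):
    """Apply T_m((2A - (a+b)I)/(b-a)) to the block X by 3-term recurrence."""
    c, e = (a + b) / 2.0, (b - a) / 2.0
    Y = (A @ X - c * X) / e
    Yp = X
    for _ in range(2, m + 1):
        Ynew = 2.0 * (A @ Y - c * Y) / e - Yp
        Yp, Y = Y, Ynew
    return Y

X = rng.normal(size=(n, k))                    # random starting block
Y = chebyshev_filter(A, X, m, a, b)
Q, _ = np.linalg.qr(Y)                         # orthonormal filtered subspace
ritz = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))  # Rayleigh-Ritz values
print(ritz)                                    # concentrate at the low end
```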
physics.comp-ph 2026-04-29

Linear algebra routines solve quantum eigenvalue problems on computers

Basic linear algebra methods for quantum problems

A review covers eigenvalue, QR, LU and related decompositions that turn intractable hand calculations into efficient library calls.

Figure from the paper
abstract
Making new methods for quantum problems often relies on using basic operations in linear algebra. Often these routines are hidden behind well-known libraries that have been optimized over decades. Attempting to improve on those basic routines would be highly time-consuming. We aim in this article to review those basic routines and provide a knowledge foundation for how to perform basic operations on a computer that would be inaccessible with pen and paper. Elementary details on the solutions to linear algebra problems and computational complexity are reviewed. The focus is on solving eigenvalue problems for quantum systems, but the discussion is generic to many other applications. Common matrix forms relevant to quantum systems and their solution strategies are covered. The discussion extends to computational numerical methods for which the most efficient functions exist in freely available libraries. These include eigenvalue, Schur, QR, LU, LDL, Cholesky, and singular value decompositions. The algorithms for obtaining some of these decompositions are discussed, with focus being placed on those used in modern libraries.
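In practice, each decomposition the review covers is a single library call; for example, with NumPy's LAPACK bindings on a toy symmetric tridiagonal (tight-binding-like) Hamiltonian:

```python
import numpy as np

# One library call per decomposition on a toy Hamiltonian
# H = 2I - (super- + sub-diagonal), which is symmetric positive definite.
n = 6
H = -np.eye(n, k=1) - np.eye(n, k=-1) + 2.0 * np.eye(n)

evals, evecs = np.linalg.eigh(H)          # symmetric eigenproblem
Q, R = np.linalg.qr(H)                    # QR decomposition
L = np.linalg.cholesky(H)                 # Cholesky (needs positive definiteness)
U, s, Vt = np.linalg.svd(H)               # singular value decomposition

# each factorization reconstructs H to machine precision
assert np.allclose(evecs @ np.diag(evals) @ evecs.T, H)
assert np.allclose(Q @ R, H)
assert np.allclose(L @ L.T, H)
assert np.allclose((U * s) @ Vt, H)
print(evals)
```

For this matrix the eigenvalues are known analytically, 2 - 2cos(jπ/(n+1)), which makes it a convenient self-check when wiring up a new solver.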
physics.comp-ph 2026-04-29

Two-rail encoding realizes classical dissipation exactly on quantum hardware

Deterministic Realization of Classical Dissipation on Quantum Computers

MRT relaxation step achieves success probability one, removing the multiplicative decay that limits scaling in quantum LB simulations.

Figure from the paper
abstract
Lattice Boltzmann (LB) on quantum devices must reconcile unitary gate evolution with the dissipative \emph{collision} step. In the multiple-relaxation-time (MRT) class, we work in the common setting of \emph{modewise diagonal} moment relaxation, $\delta m_r'=\lambda_r\,\delta m_r$ with $\lambda_r\in[-1,1]$ (overrelaxation if $\lambda_r<0$). Embedding that contraction in a unitary by block encoding or a linear combination of unitaries (LCU) typically yields subunitary success probability that decays multiplicatively across modes, sites, and time, a key bottleneck for quantum LB. \emph{For the dissipative MRT block alone} we give a \emph{block-encoding-free} construction: a signed \emph{two-rail} population encoding, then a completely positive trace-preserving (CPTP) map (per-rail amplitude damping with survival $|\lambda_r|$ and, if $\lambda_r<0$, a rail SWAP) so that, after the decode, the map agrees with classical MRT relaxation exactly (expectations of the rail number operators, common encoding--decode scale). Trace preservation gives success probability $1$ for that substage. The main result is the dissipative MRT block; construction of the equilibrium moment vector~$m^{\mathrm{eq}}=Mf^{\mathrm{eq}}$ (prescribed~$f^{\mathrm{eq}}$, host moment matrix~$M$; notation as in Section~\ref{subsec:generic-mrt}), moment transforms, streaming, and boundaries are composed with it as in a standard host pipeline and lie outside the scope of the formal theorem. Hybrid and fully coherent encodings, adaptive scales, Carleman-based context, and a one-rail no-go in the same nonnegative population framework are in the main text. Audits of the open-channel map on a long LBM collide-stream simulation and on stencil-free inputs both match the target to machine precision.
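The classical content of the two-rail construction can be checked in a few lines: encode a signed moment deviation on two nonnegative rails, damp both by |lambda|, and swap rails when lambda < 0. The sketch below verifies that the decoded value reproduces MRT relaxation exactly; it emulates the CPTP map's effect at the level of expectation values only, with no quantum simulation.

```python
import numpy as np

# Signed two-rail encoding: delta_m = p - n with p, n >= 0. Relaxation
# delta_m' = lambda * delta_m is realized by per-rail damping with
# survival |lambda| plus a rail SWAP when lambda < 0 (overrelaxation).
def two_rail_relax(delta_m, lam):
    p, n = max(delta_m, 0.0), max(-delta_m, 0.0)   # encode sign on two rails
    p, n = abs(lam) * p, abs(lam) * n              # per-rail damping
    if lam < 0:
        p, n = n, p                                # rail SWAP flips the sign
    return p - n                                   # decode

# agrees with classical MRT relaxation for lambda in [-1, 1]
for dm in (0.7, -0.3):
    for lam in (0.5, -0.8, 1.0, -1.0, 0.0):
        assert abs(two_rail_relax(dm, lam) - lam * dm) < 1e-12
print("two-rail relaxation matches classical MRT")
```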
physics.comp-ph 2026-04-29

Flow matching produces fast calibrated snowfall ensembles

Conditional Flow Matching for Probabilistic Downscaling of Maximum 3-day Snowfall in Alaska

The model improves spectral fidelity by 87.8 percent over bicubic downscaling while generating full 50-member probabilistic outputs on a laptop.

Figure from the paper
abstract
Precipitation in complex terrain is governed by orographic processes operating at scales of a few kilometers, yet climate models typically run at resolutions of 50--100~km where this topographic detail is absent. Dynamical downscaling with high-resolution regional models such as WRF can resolve these processes, but the computational cost -- months of wall-clock time per scenario -- precludes the large ensembles needed for uncertainty quantification. We present WxFlow, a conditional generative model based on flow matching that learns to map coarse-resolution climate model output and high-resolution topography to calibrated probabilistic ensembles of fine-scale precipitation fields. Applied to 4~km WRF simulations of maximum 3-day snowfall over southeast Alaska, WxFlow achieves 87.8\% improvement in spectral fidelity and dramatically lower Continuous Ranked Probability Scores relative to conventional lapse-rate-corrected bicubic downscaling, while generating 50-member ensembles in seconds on a laptop. Ensemble spread is spatially coherent and governed by topography, reflecting physically plausible uncertainty structure. All code is available at https://github.com/glide-ism/wrf-flow.
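For readers unfamiliar with flow matching: the linear-path conditional flow matching objective behind such models regresses a velocity field onto the constant target x1 - x0 along straight interpolants. The NumPy sketch below uses an untrained linear stand-in model and omits the paper's conditioning on coarse climate fields and topography.

```python
import numpy as np

# Conditional flow matching loss: sample t, form x_t = (1-t)*x0 + t*x1
# between noise x0 and data x1, and regress v(x_t, t) onto x1 - x0.
rng = np.random.default_rng(0)
d, batch = 16, 128
x1 = rng.normal(loc=2.0, size=(batch, d))     # stand-in "data" samples
x0 = rng.normal(size=(batch, d))              # noise samples
t = rng.uniform(size=(batch, 1))

x_t = (1.0 - t) * x0 + t * x1                 # point on the probability path
v_target = x1 - x0                            # conditional velocity target

W = 0.01 * rng.normal(size=(d + 1, d))        # toy untrained linear model
v_pred = np.hstack([x_t, t]) @ W              # v(x_t, t)
cfm_loss = np.mean(np.sum((v_pred - v_target) ** 2, axis=1))
print(cfm_loss)
```

Sampling then integrates the learned velocity field from noise to data, which is what makes 50-member ensembles cheap once training is done.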
physics.comp-ph 2026-04-28

1D model predicts Ti-6Al-4V phases from LPBF settings

Microstructure engineering of Ti-6Al-4V in laser powder bed fusion via 1D thermal modeling and supporting experiments

The framework matches experiments and explores 2,000 parameter combinations orders of magnitude faster than high-fidelity simulations to guide microstructure control.

Figure from the paper
abstract
The microstructure of Ti-6Al-4V has a decisive impact on its mechanical performance; however, controlling phase composition during Laser Powder Bed Fusion (LPBF) remains difficult because of the inherent localized and cyclic thermal history. To fully leverage the design flexibility of LPBF while maintaining an efficient process, it is desirable to tailor the microstructure directly through process-parameter optimization rather than relying on post-processing or in-situ heat treatments. Nevertheless, the large and multidimensional parameter space, combined with the limited availability of experimental data, makes this task particularly challenging. In this work, we develop an efficient computational framework that links process conditions to microstructure evolution by coupling a phase transformation model with a fast 1D finite-difference thermal model, enabling comprehensive insights into process-microstructure relations. The framework predicts the fractions of stable $\alpha_s$, martensitic $\alpha_m$, and $\beta$ phases and is validated experimentally. A broad design of experiments covering 2,000 parameter combinations (spanning volumetric energy density, layer thickness, interlayer time, and build plate temperature) demonstrates how these parameters influence phase evolution and provides systematic practical guidelines for process design. The framework reproduces experimental trends with sufficient accuracy while being orders of magnitude faster than high-fidelity simulations, enabling rapid exploration of process-structure relationships in LPBF of Ti-6Al-4V.
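The fast thermal solver's core idea, an explicit 1D finite-difference conduction step with a hot surface standing in for the laser, fits in a few lines. Material values and boundary treatment below are illustrative, not the paper's Ti-6Al-4V parameters.

```python
import numpy as np

# Explicit 1D heat conduction into the depth of the part.
alpha = 9e-6            # thermal diffusivity (m^2/s), illustrative
dz, dt = 1e-6, 2e-8     # grid spacing (m) and time step (s)
r = alpha * dt / dz**2  # explicit scheme is stable for r <= 0.5
assert r <= 0.5

T = np.full(200, 300.0)              # initial temperature (K) along depth
T[0] = 2500.0                        # surface held hot by the laser pulse
for _ in range(2000):
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = 300.0                    # far field held at ambient
print(T[:5])
```

Coupling a phase model then amounts to feeding each point's T(t) history into transformation kinetics, which is why a 1D solve per parameter set is enough to sweep thousands of combinations.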
physics.comp-ph 2026-04-28

PIC code module adds axions that match analytic plasma emissivities

An axion framework for Particle-in-Cell codes with Monte-Carlo sampling: emission, absorption, and detailed balance in plasmas

Percent-level agreement with screened calculations and steady-state relaxation are shown for Primakoff, Compton, and bremsstrahlung channels

Figure from the paper
abstract
We present an extension of the OSIRIS particle-in-cell (PIC) code that introduces an axion macroparticle species and three axion-production channels commonly used in thermal-plasma axion phenomenology: screened Primakoff conversion $(\gamma + Z \leftrightarrow a + Z)$, Compton-like photoproduction on electrons in a blackbody photon bath $(\gamma + e \to a + e)$, and thermal axion bremsstrahlung from electron-ion and electron-electron scattering $(e + Z \to e + Z + a$ and $e + e \to e + e + a)$. The package is integrated into the existing OSIRIS quantum-electrodynamics (QED) Monte Carlo infrastructure and provides Poisson macro-event sampling with unbiased weight rescaling for variance control. Optional modules implement conservative cell-local energy and momentum feedback and temperature-field evolution, and each channel includes an inverse absorption operator constructed to satisfy detailed balance with a thermal bath. We benchmark forward spectral emissivities for uniform plasmas at $T_e = 1.3~\mathrm{keV}$, $3~\mathrm{keV}$, and $5~\mathrm{keV}$ against screened analytic results based on Raffelt-style calculations, finding percent-level agreement in integrated power for all channels and good reproduction of spectral peak positions. In addition, homogeneous relaxation tests with forward and inverse operators enabled show that, for all three implemented channels, the axion population and total axion energy evolve toward stable steady-state values, providing an initial validation of detailed-balance recovery in the inverse-process implementation. These results establish a foundation for kinetic simulations of axion production, absorption, and transport in high-energy-density plasmas, while more extensive validation of feedback physics and fully dynamic multidimensional coupled scenarios remains future work.
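Poisson macro-event sampling with weight rescaling, mentioned above for variance control, amounts to capping the sampled event count and rescaling weights so the expectation stays unbiased. The sketch below is illustrative, not OSIRIS's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_macro_events(mu, n_max, rng):
    """Return (n_events, weight_per_event) with E[n * w] = mu."""
    if mu <= n_max:
        return rng.poisson(mu), 1.0
    # cap the count; each macro-event then represents mu/n_max physical events
    return rng.poisson(n_max), mu / n_max

# unbiasedness check: the weighted event count averages to mu
mu, n_max, trials = 50.0, 8, 20000
total = sum(n * w for n, w in (sample_macro_events(mu, n_max, rng)
                               for _ in range(trials)))
print(total / trials)   # close to mu
```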
physics.comp-ph 2026-04-28 Recognition

Conservative eigenvectors place exact zeros in shear density row

On Reconstructing Conservative and Primitive Variables: An Eigenvector Analysis on Curvilinear Grids

Metric-free isolation of contacts enables rank-one entropy correction on curvilinear grids

abstract
In wall-modelled large-eddy simulations of hypersonic boundary-layer transition, Hoffmann, Chamarthi and Frankel reported that characteristic reconstruction based on conservative-variable eigenvectors produced markedly better results than the corresponding primitive-variable implementation. The observation was empirical. A subsequent wave-appropriate conservative reconstruction (WA-CR) algorithm used a rank-one entropy correction based on the premise that contact-discontinuity error lies in a single conservative entropy/contact direction. This note gives the algebraic foundation for both observations. For the standard conservative curvilinear eigenvectors, the density row of the right-eigenvector matrix contains exact, metric-free zeros in the shear columns, so shear waves carry no density perturbation and a contact discontinuity is represented by the conservative entropy eigenvector alone. The conservative left eigenvectors provide the dual projection property: the entropy amplitude is obtained with a metric-independent left eigenvector and has unit contact scaling, while total-energy perturbations have zero projection onto the shear amplitudes. In the standard primitive curvilinear eigenvectors, by contrast, shear right eigenvectors contain metric-dependent density components and the primitive entropy left eigenvector contains metric-weighted tangential-velocity terms. Thus the conservative formulation supplies the two algebraic requirements for an exact, sufficient, metric-invariant, rank-one entropy correction: metric-independent entropy projection and a metric-independent entropy update direction. Curvilinear metrics make the distinction explicit, but the conservative state-space contact direction is already the natural direction underlying WA-CR even on Cartesian grids.
physics.comp-ph 2026-04-28

Fractal prior boosts ML flow predictions only in matching regimes

Learning subgrid interfacial area in two-phase flows with regime-dependent inductive biases

Physics-constrained model cuts errors in corrugation but loses edge when droplets break up

Figure from the paper
abstract
The reliability of machine learning in multiscale physical systems depends on how physical structure is embedded into the learning process. We investigate this in the context of turbulent multiphase flows, focusing on the prediction of subgrid interfacial area density, a key quantity governing interphase transport that remains unresolved in large-eddy simulations. In this work, we develop and evaluate two machine learning subgrid closure models to predict the three-dimensional subgrid interfacial area density: a purely data-driven 3D encoder-decoder network, and a physics-constrained variant regularized by a fractal geometric prior. Across a range of Weber numbers, the physics-based model improves predictive accuracy, reduces error variance, and suppresses nonphysical artifacts relative to purely data-driven approaches. We also show that these gains are regime-dependent: the embedded inductive bias enhances generalization in corrugation-dominated regimes where its underlying assumptions hold, but becomes ineffective in fragmentation-dominated regimes characterized by topology change and droplet breakup. These results reveal a broader principle for scientific machine learning: the utility of physics-informed models depends not only on the presence of inductive bias, but on its alignment with the governing physical regime. This suggests a path toward regime-aware learning frameworks for modeling of complex multiscale systems.
physics.comp-ph 2026-04-27 2 theorems

Graph neural net predicts HEA energies at first-principles RMSE

Crystal Fractional Graph Neural Network for Energy Prediction of High-Entropy Alloys

Fusing 16-atom local attention with global element fractions yields accurate results even on low-energy alloy configurations.

Figure from the paper
abstract
High-entropy alloys (HEAs) have attracted growing attention for their exceptional mechanical and thermal properties arising from complex atomic configurations. In this paper, we propose crystal fractional graph neural network for predicting the energy of high-entropy alloys by explicitly integrating both local atomic environments and global compositional information. The model consists of three components: a crystal graph neural network, which employs graph attention network layers to learn local interactions among 16 on-site atoms within the crystal lattice; fractional neural network, a fully connected network that embeds the global fraction of constituent elements; and feature fusion neural network, which fuses the outputs of the two submodels to predict the total crystal energy. We train the model on a dataset of 1,049 crystal structures and validate it on 198 quaternary structures, optimizing all hyperparameters via Optuna. Our results show that our model achieves an RMSE comparable to first-principles calculations and maintains high accuracy even for low-energy configurations. However, the model exhibits limitations in handling large crystal cells, which we aim to address in future work to extend its applicability to more complex systems.
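Schematically, the three-part architecture is a local branch pooled over the 16 on-site atoms, a global branch embedding element fractions, and a fusion head. The NumPy sketch below uses random untrained weights purely to show the data flow; it replaces the paper's graph attention layers with a plain MLP and invents all dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, sizes, rng):
    """Tiny tanh MLP with random stand-in weights (no training)."""
    for n_out in sizes:
        W = rng.normal(scale=0.1, size=(x.shape[-1], n_out))
        x = np.tanh(x @ W)
    return x

atom_feats = rng.normal(size=(16, 8))           # descriptors of 16 on-site atoms
fractions = np.array([0.25, 0.25, 0.25, 0.25])  # global element fractions

local = mlp(atom_feats, [16, 8], rng).mean(axis=0)   # pooled local branch
glob = mlp(fractions[None, :], [8, 8], rng)[0]       # fractional branch
energy = mlp(np.concatenate([local, glob])[None, :], [16, 1], rng)[0, 0]
print(energy)                                        # fused energy prediction
```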
physics.comp-ph 2026-04-27

Validation-driven AI fixes errors in supercomputer simulation workflows

LARA: Validation-Driven Agentic Supercomputer Workflows for Atomistic Modeling

Dry-run checks and iterative refinement correct syntactic and physical inconsistencies in atomistic DFT scripts before they run on HPC.

Figure from the paper
abstract
Large language models (LLMs) and agentic systems have recently demonstrated potential for automating scientific workflows, including atomistic simulations. However, their deployment in high-performance computing (HPC) environments remains limited by the lack of mechanisms ensuring correctness, reproducibility, and safe interaction with computational resources. Generated workflows suffer from inconsistencies, incorrect API usage, or invalid physical configurations - leading to failed or unreliable simulations. In this work, we introduce LARA-HPC, a validation-driven agentic framework to enable reliable workflow generation for atomistic modeling on HPC systems. Our approach is based on three key components: (i) a controlled execution layer that mediates all interactions with HPC resources; (ii) simulation-native validation through dry-run capabilities, enabling execution-level verification without incurring resource cost; and (iii) a multi-phase agentic pipeline combining retrieval-augmented generation and iterative refinement. We demonstrate the effectiveness of this approach performing an end-to-end atomistic simulation workflow on HPC by applying LARA-HPC to Density Functional Theory simulations. The results show that validation-driven generation significantly improves robustness and enables iterative correction of both syntactic and physical inconsistencies. More broadly, this work advocates for a shift from generation-first to validation-first paradigms in Artificial Intelligence (AI) assisted scientific computing. We argue that the future task of the computational physics community is to develop domain specific agentic systems based on structured tooling to realize an HPC enabled co-piloted research ecosystem.
physics.comp-ph 2026-04-24

Thin-sheet VIE solver includes normal fields for metasurface modeling

A Thin Sheet Volume Integral Equation Solver for Simulation of Bianisotropic Metasurfaces

The method reduces volume equations to surfaces while treating tangential and normal fluxes distinctly to enforce the complete bianisotropic GSTCs.

Figure from the paper
abstract
A thin-sheet (TS) volume integral equation (VIE) formulation incorporating generalized sheet transition conditions (GSTCs) is presented for the simulation of three-dimensional (3D) bianisotropic metasurfaces. The metasurface is represented as an equivalent TS, with its constitutive tensors derived from the GSTC susceptibility tensors. Invoking the TS approximation, the governing VIEs are reduced to surface integral equations (SIEs), in which tangential and normal flux density components are treated as distinct sets of unknowns and discretized using Rao-Wilton-Glisson and pulse basis functions, respectively. In contrast to conventional GSTC approaches based on conventional SIEs, which represent only tangential fields, the proposed framework rigorously enforces the bianisotropic GSTCs, including normal field interactions, while retaining the flux-based VIE character of the formulation. Numerical examples demonstrate the accuracy and robustness of the proposed TS-VIE-GSTC solver for polarization rotation, perfect reflection, multi-directional attenuation, and oblique phase-shift transformation.
physics.comp-ph 2026-04-24

GROMACS gains direct support for PyTorch neural network potentials

Enabling Biomolecular Simulations with Neural Network Potentials in GROMACS

The interface lets users run hybrid ML/MM simulations of peptides, proteins and ligands while keeping access to all standard sampling and free energy workflows.

Figure from the paper
abstract
Neural network potentials (NNPs) are rapidly changing the landscape of state-of-the-art molecular dynamics (MD) simulations. To make full use of this development, the community needs flexible, easy-to-use interfaces firmly integrated with existing methodologies. To address this, we here present an interface for hybrid machine learning/molecular mechanics (ML/MM) simulations implemented in the widely used MD code GROMACS. The interface enables NNPs trained in the PyTorch framework to contribute energies and forces during MD simulations, either for selected subsets or entire molecular systems. By defining a flexible set of model inputs and outputs, the interface is agnostic to specific NNP architectures and can accommodate a wide range of descriptor-based and message-passing models. In particular, the design integrates NNP inference seamlessly into the extensive GROMACS molecular simulation ecosystem, providing users with the capability to straightforwardly combine NNPs with existing advanced sampling and free energy workflows. We demonstrate the capabilities of the interface using several representative applications, including enhanced sampling of peptide torsional free energy landscapes, absolute solvation free energy calculations, and protein--ligand simulations. We also run performance benchmarks on water boxes for several different NNP architectures. Our interface is available in recent GROMACS releases, and we believe it will provide a practical foundation for incorporating machine learning potentials into production MD simulations of biomolecular systems.
physics.comp-ph 2026-04-24

Müller equation cancels hypersingularity via kernel difference

A High-Order Nodal Galerkin Formulation for the Müller Equation: Bypassing Divergence Conformity via Kernel Cancellation

Exact cancellation reduces kernels to weakly singular form, enabling high-order nodal bases on curved surfaces and robust iterative solves.

Figure from the paper
abstract
The Müller boundary integral equation for penetrable electromagnetic scattering is conventionally discretized using divergence-conforming basis functions, a restriction inherited from the PMCHWT framework. This paper demonstrates that this constraint can be bypassed. The double-gradient operator in the Müller formulation acts on the kernel difference $\varphi_a - \varphi_i$, so that the $\mathcal{O}(R^{-3})$ hypersingularity cancels identically, reducing the operators to weakly singular $\mathcal{O}(R^{-1})$ kernels. Exploiting this cancellation, we develop a nodal, high-order Galerkin formulation using $\mathrm{P}_2$ isoparametric shape functions on curved manifolds. The surface vector field is constructed via a metric-weighted orthonormal tangent frame. The singular integrals are evaluated by Sauter--Schwab quadrature, and a Morton-ordered Block Jacobi preconditioner is introduced. By capturing the dominant near-field interactions within geometrically clustered diagonal blocks, it yields robust, superlinear GMRES convergence under extreme material and geometric parameters. Validation against semi-analytical EBCM references confirms high-order spatial accuracy and optical-theorem satisfaction to high precision.
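The cancellation can be checked numerically: components of the grad-grad kernel scale like O(R^-3) at short range for each wavenumber separately, but like O(R^-1) for the kernel difference. Below, the transverse component g'(R)/R of the dyadic of g(R) = exp(ikR)/(4πR) is evaluated in closed form at two separations; the wavenumbers are arbitrary test values, not from the paper.

```python
import numpy as np

# Transverse component of grad-grad of the scalar Green's function:
# for g(R) = exp(1j*k*R)/(4*pi*R), the (I - RR^T) part of the dyadic
# carries g'(R)/R = exp(1j*k*R)*(1j*k*R - 1)/(4*pi*R**3).
def h(k, R):
    return np.exp(1j * k * R) * (1j * k * R - 1.0) / (4.0 * np.pi * R**3)

ka, ki = 2.0, 3.0        # "outer" and "inner" wavenumbers, test values
R1, R2 = 1e-4, 1e-3      # two small separations, a decade apart

single_ratio = abs(h(ka, R1)) / abs(h(ka, R2))          # ~ (R2/R1)^3 = 1000
diff_ratio = (abs(h(ka, R1) - h(ki, R1))
              / abs(h(ka, R2) - h(ki, R2)))             # ~ (R2/R1)^1 = 10
print(single_ratio, diff_ratio)
```

The k-independent 1/R^3 singularity is identical in both kernels, so it drops out of the difference, leaving the weakly singular remainder that permits nodal bases.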
physics.comp-ph 2026-04-23

MCSs reinforce the Madden-Julian Oscillation

Two-Way Feedback Mechanisms between the Madden-Julian Oscillation and Mesoscale Convective Systems

Enhanced MCS activity in favorable MJO phases generates circulation anomalies that support eastward propagation of the convective envelope.

abstract
The Madden-Julian Oscillation (MJO) is a planetary-scale convective system characterized by large-scale envelopes of enhanced and suppressed convection that contain numerous mesoscale convective systems (MCSs). While MCSs are widely recognized as the fundamental convective elements embedded within the MJO, their relationship with the MJO is intrinsically two-way: the MJO modulates the large-scale dynamical and thermodynamic environment that organizes MCS activity, while the collective upscale impacts of MCSs feed back onto the MJO through the transport of momentum and heat. However, the nature of this bidirectional interaction remains insufficiently quantified from an observational perspective. In this study, we use satellite-based MJO indices together with a long-term, objectively tracked MCS dataset to investigate the two-way feedback mechanisms between the MJO and MCSs. By compositing MCS activity across different MJO phases and analyzing their environmental conditions, we quantify how the evolving MJO circulation regulates MCS frequency, intensity, and organization. At the same time, we diagnose the aggregate influence of MCS populations on the large-scale MJO circulation through their associated momentum and thermodynamic anomalies. Our results reveal a robust two-way coupling between the MJO and MCSs. Enhanced MCS activity preferentially occurs in specific MJO phases associated with favorable moisture, instability, and vertical shear, indicating strong MJO control on MCS organization. Conversely, periods of enhanced MCS activity are associated with coherent large-scale circulation anomalies consistent with upscale transport of momentum and moisture that reinforce the MJO convective envelope and support its eastward propagation. This feedback suggests that MCSs are not merely passive responses to the MJO environment, but actively contribute to its maintenance and evolution.
physics.comp-ph 2026-04-23

Environmental factors explain half of tropical MCS monthly variance

Modulation Effects of Atmospheric Environmental Conditions on Mesoscale Convective Systems over Tropical Oceans

Random forest analysis of satellite-tracked storms identifies moisture convergence, instability, and water vapor as top controls, whose influence shifts with region and season.

abstract
Mesoscale convective systems (MCSs) play a central role in tropical rainfall and are closely linked to extreme precipitation and large-scale variability. However, a quantitative understanding of their environmental controls remains incomplete. In this study, we construct an observational MCS dataset by applying an objective tracking algorithm to satellite and reanalysis data, and examine the climatology of tropical MCSs. We further use a Random Forest model to quantify environmental controls at the monthly scale. The results show pronounced spatial and seasonal variability in tropical MCS activity, closely tied to large-scale circulation and moisture availability. Environmental predictors explain up to about 50\% of the variance in monthly MCS frequency and associated precipitation. Moisture convergence, atmospheric instability, and column-integrated water vapor emerge as the leading controlling factors. Partial dependence analyses reveal clear nonlinear interactions among key predictors. The relative importance of environmental controls also varies with region and season, with thermodynamic factors dominating in some regimes and dynamic factors such as vertical wind shear playing a larger role in others. Overall, this study provides an observationally constrained quantification of environmental controls on tropical MCSs and offers new insight into their variability and potential response to climate variability and change.
physics.comp-ph 2026-04-23

New particle code simulates laser-plasma radiation hydrodynamics

SPRAY: A smoothed particle radiation hydrodynamics code for modeling high intensity laser-plasma interactions

It uses smoothed particles and mesh-free rays to track energy in deforming targets hit by intense lasers.

Here we report the development of SPRAY, a massively parallel GPU accelerated, smoothed particle hydrodynamics (SPH)-based, radiation hydrodynamics (RHD) code designed specifically for simulating high intensity laser-plasma interactions. When a target is irradiated by an intense laser, highly complex fluid deformation occurs due to instabilities, which is challenging to study numerically. SPRAY is particle-based, mesh-free, and Lagrangian, which addresses numerical issues that have posed difficulties for existing methods. Its SPH formulations for RHD governing equations are tailored toward accurate and reliable simulations of laser-target irradiation phenomena, and are solved via a time-dependent, flux-limited diffusion method. A new laser energy coupling module, which is based on the Wentzel-Kramers-Brillouin (WKB) approximation, is implemented with a totally mesh-free ray-tracing scheme that is applicable for arbitrary geometry and dimensions. The accuracy and reliability of the code are demonstrated with a series of benchmark problems. To the authors' knowledge, this is the first attempt to employ the SPH method for simulations of laser-plasma interactions in high energy density physics research. Possible expansions to the code, such as laser beam-beam interaction modeling and more sophisticated multi-group radiation transport, are left for future development.
physics.comp-ph 2026-04-22

Surrogate models enable fast tritium analysis for fusion pilot plants

Multiscale Assessment of Tritium Behavior in Preliminary Fusion Pilot Plant Design Using Surrogate Models in TMAP8

Multiscale integration in TMAP8 quantifies retention and loss to guide plasma-facing component optimization in normal and bake-out operation

The complexity and significance of multiscale phenomena in fusion energy systems make advanced modeling necessary for designing, optimizing, and safely deploying fusion plants. Tritium accountancy is one of those challenges for deuterium-tritium fusion systems. Its availability is constrained by its short half-life (12.33 years) and limited natural abundance, which require fusion plants to breed tritium onsite. Therefore, accurate tritium accountancy is essential for effective resource management, safety, and economics in fusion plants. Through the U.S. Department of Energy milestone program, Tokamak Energy Ltd. is developing a fusion pilot plant design and evaluating tritium retention and loss in key components and their effect on the fuel cycle. To rapidly explore design trade-offs and quantify design decisions on tritium management, this study presents a multiscale analysis to investigate tritium diffusion, trapping, and recovery in key plasma-facing components. To enhance computational efficiency, we integrate surrogate models at the component-level within a fuel cycle model at the system-level, enabling rapid evaluation of tritium recycling dynamics and inventory under various operational scenarios. The goal of this study is twofold: (1) demonstrate the feasibility of utilizing surrogate models to increase the accuracy of fuel cycle modeling, and (2) rapidly evaluate the performance of fusion technologies to accelerate design iterations. This multiscale model provides the tritium transport and retention behavior and supports the plasma-facing components design optimization in normal and bake-out operations. The work is implemented using the Tritium Migration Analysis Program, Version 8 (TMAP8), an open-source application for tritium transport analysis in fusion systems.
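The half-life quoted above fixes the decay constant lambda = ln 2 / 12.33 per year, and a site-level inventory balance makes the accountancy stakes concrete; a toy sketch with breeding and loss rates chosen purely for illustration (not TMAP8 or the actual plant design):

```python
import math

HALF_LIFE_Y = 12.33
lam = math.log(2) / HALF_LIFE_Y           # tritium decay constant, 1/yr

def inventory(breed_rate, loss_frac, years, i0=0.0, dt=1e-3):
    """Forward-Euler site inventory: dI/dt = B - (lam + loss) * I."""
    i = i0
    for _ in range(int(years / dt)):
        i += dt * (breed_rate - (lam + loss_frac) * i)
    return i

# The long-run inventory approaches breed_rate / (lam + loss_frac),
# showing how decay and component losses cap what breeding can sustain.
i_inf = inventory(breed_rate=1.0, loss_frac=0.05, years=200.0)
```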
physics.comp-ph 2026-04-22

Neural operator maps microstructure to granular failure envelopes

Neural Operator Representation of Granular Micromechanics-based Failure Envelope

The model predicts stress limits without repeated simulations and enforces physical convexity through a built-in regularization term.

Micromechanics-based granular models are widely used to predict the failure behavior of porous and particulate materials, including concrete, soils, foams, and biological tissues. Although these models offer considerable flexibility through microstructural parametrization and statistical representation, their mapping to macroscopic responses, particularly failure envelopes, is implicit and requires costly nonlinear, non-smooth simulations, where each failure point is obtained by following a loading trajectory. This limitation is further amplified in inverse settings, where one seeks microstructure configurations that reproduce a target failure response. In this work, we propose a differentiable neural operator that learns the mapping from microstructure configurations to failure envelopes, enabling efficient forward prediction and inverse identification without repeated micromechanical simulations. To ensure mechanical admissibility, we incorporate a physics-informed training strategy that enforces convexity of the predicted envelopes, consistent with Drucker's postulate, thereby eliminating potential non-physical artifacts. We also compare finite difference and automatic differentiation for evaluating the proposed regularization, and find that finite difference provides a favorable practical trade-off in the present DeepONet-based setting. The operator is trained on failure envelopes represented as irregular point clouds, allowing learning from data sampled at heterogeneous resolutions. To further reduce computational cost, we introduce an active learning strategy that adaptively queries the micromechanical model in regions of high epistemic uncertainty. This leads to efficient exploration of the parameter space with fewer high-fidelity simulations. The versatility and performance of the method are demonstrated and benchmarked through several numerical examples.
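The convexity requirement from Drucker's postulate can be checked by finite differences on an envelope given as a polar radius function r(theta), using the sign condition r^2 + 2 r'^2 - r r'' >= 0; a numpy sketch of that idea (a stand-in for the paper's DeepONet regularizer, not its implementation):

```python
import numpy as np

def convexity_penalty(r, dtheta):
    """Penalize concave segments of a closed polar curve r(theta).

    A polar curve is locally convex where r^2 + 2 r'^2 - r r'' >= 0;
    central differences on a periodic grid approximate r' and r''.
    """
    rp = (np.roll(r, -1) - np.roll(r, 1)) / (2 * dtheta)
    rpp = (np.roll(r, -1) - 2 * r + np.roll(r, 1)) / dtheta**2
    kappa_sign = r**2 + 2 * rp**2 - r * rpp
    return np.clip(-kappa_sign, 0.0, None).sum()

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
dtheta = theta[1] - theta[0]
circle = np.ones_like(theta)              # convex envelope: zero penalty
star = 1.0 + 0.5 * np.cos(5 * theta)      # concave lobes: positive penalty
```

Adding such a penalty to a training loss pushes predicted envelopes toward convex shapes, which is the spirit of the physics-informed strategy described above.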
physics.comp-ph 2026-04-21

Iterative method recovers 3D nanostructures from one or two X-ray angles

Nonuniform Iterative Phasing Framework and Sampling Requirements for 3D Dynamical Inversion from Coherent Surface Scattering Imaging

New framework handles dynamical scattering and nonuniform sampling in grazing-incidence data while deriving minimal experimental needs for 3D inversion.

Coherent surface scattering imaging (CSSI) is an emerging experimental technique uniquely suited to probing the structure of thin nanostructures. In these experiments, a specimen is placed on a substrate, and a series of X-ray diffraction patterns is collected at grazing incidence angles as the specimen is rotated. However, reconstructing the specimen's 3D structure from the data is challenging due to dynamical scattering effects induced by the experimental geometry and the lack of direct phase measurements. Specifically, the data involves nonuniformly sampled Fourier-transform values of the specimen density, and failure to effectively address this nonuniformity can lead to errors or degraded performance. Here we introduce a mathematical inversion framework that combines iterative-projection-based phasing techniques with new fast nonuniform Fourier inversion methods to efficiently reconstruct isolated 3D structures from their CSSI rotation-series data. We also analyze the theoretical properties of CSSI reconstruction to derive requirements on experimental parameters and characterize solution uniqueness. We validate our approach using CSSI data simulated from a conical Siemens star and a porous medium, demonstrating that high-resolution 3D structures can be reconstructed even in the presence of significant dynamical scattering, from data collected at as few as one or two incident angles. More broadly, the presented nonuniform reconstruction framework provides a foundation for solving challenging generalizations of the phase problem in which measurements involve nonlinear combinations of nonuniformly sampled Fourier values.
physics.comp-ph 2026-04-21

Geometry fixes the damping law for non-spherical contacts

Consistent control of energy dissipation in non-spherical particle contact via a structure-preserving formulation

Projected dynamics and energy-phase transformation set the unique admissible dissipation structure, keeping contact-point restitution e_cn consistent across impact configurations.

The control of energy dissipation in non-spherical particle contact remains an unresolved problem. Unlike spherical contact, where the interaction reduces to a one-dimensional normal oscillator, both the effective inertia and the effective stiffness depend on the evolving contact geometry, and the impact dynamics are intrinsically coupled across translational, rotational, and tangential directions. Classical damping formulations are therefore structurally incompatible with the contact dynamics they are intended to represent. This work addresses the problem from first principles. By projecting the dynamics onto contact degrees of freedom, the interaction is shown to be governed by an instantaneous contact dynamics with a configuration-dependent projected mass and intrinsic translational--rotational coupling. Building on the exact energy--phase transformation for monotone conservative contact, we show that consistent dissipation requires a unique damping structure aligned with the underlying contact energy. The analysis leads to two central consequences. First, the admissible damping law is not empirical but fixed by the harmonic structure revealed in transformed space. Second, the appropriate coefficient of restitution for non-spherical particles is the contact-point restitution $e_{cn}$, whereas the total energy restitution $e_E$ is a geometry-dependent outcome that includes coupling-induced energy transfer. Numerical evidence based on smooth single-contact impacts confirms the theory: the resulting formulation controls $e_{cn}$ consistently across impact configurations, while the apparent variability of $e_E$ follows directly from the coupled dynamics.
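For the spherical baseline mentioned above, where contact reduces to a one-dimensional normal oscillator, the linear spring-dashpot damping that realizes a prescribed restitution e is fixed by zeta = -ln e / sqrt(pi^2 + ln^2 e); a sketch verifying this by direct impact integration (toy parameters, not the paper's non-spherical formulation):

```python
import math

def damping_for_restitution(e, k, m):
    """Dashpot coefficient giving restitution e for a linear normal contact."""
    zeta = -math.log(e) / math.sqrt(math.pi**2 + math.log(e)**2)
    return 2.0 * zeta * math.sqrt(k * m)

def simulate_impact(e_target, k=1e4, m=1.0, v_in=1.0, dt=1e-6):
    """Integrate one contact event and return the measured restitution."""
    c = damping_for_restitution(e_target, k, m)
    x, v = 0.0, -v_in                     # approach with speed v_in
    while x <= 0.0:                       # in contact while overlap persists
        a = (-k * x - c * v) / m          # spring + dashpot normal force
        v += a * dt                       # semi-implicit Euler
        x += v * dt
    return v / v_in                       # rebound speed / approach speed

e_meas = simulate_impact(0.5)
```

For non-spherical particles, the abstract's point is precisely that this fixed-coefficient recipe breaks down, because the projected mass and stiffness evolve with contact geometry.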
physics.comp-ph 2026-04-20

SD-GLE extrapolates long-time diffusion from short data in disordered systems

Coarse-Grained Dynamics with Spatial Disorder and Non-Markovian Memory

By isolating spatial disorder from memory effects, the model recovers ensemble statistics where standard generalized Langevin equations fail

We introduce the spatial disorder-generalized Langevin equation (SD-GLE), a data-driven method for constructing coarse-grained (CG) dynamics in heterogeneous systems. Unlike conventional CG approaches that rely on a mean-field potential, SD-GLE utilizes a variational Bayesian framework with a random field prior to explicitly disentangle static spatial disorder from viscoelastic friction. Numerical results demonstrate the limits of standard GLEs, whereas SD-GLE accurately extrapolates long-time dynamics to capture the anomalous diffusion crossover from short trajectories and recover the ensemble statistical properties inherent to the disordered nature of these systems.
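To fix ideas on the model class: a GLE with an exponential memory kernel can be integrated through an auxiliary friction variable, with static spatial disorder entering as a frozen position-dependent force; a toy sketch in which white noise stands in for the colored thermal force (parameters and the disorder landscape are illustrative, not the SD-GLE inference procedure):

```python
import math
import random

rng = random.Random(0)
gamma, tau, dt = 1.0, 0.5, 1e-3       # friction strength, memory time, step

def disorder_force(x):
    """Frozen spatial disorder: a static periodic landscape (toy stand-in)."""
    return -0.3 * math.sin(2.0 * math.pi * x)

x, v, z = 0.0, 0.0, 0.0               # z = memory friction (auxiliary var.)
for _ in range(50_000):
    # Exponential kernel K(t) = (gamma/tau) exp(-t/tau) is equivalent to
    # an auxiliary variable relaxing toward -gamma*v with timescale tau.
    z += dt * (-(z + gamma * v) / tau)
    noise = math.sqrt(2.0 * gamma * dt) * rng.gauss(0.0, 1.0)
    v += dt * (z + disorder_force(x)) + noise   # noise simplified to white
    x += dt * v
```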
physics.comp-ph 2026-04-20

Statistics-informed shuffling cuts bias in resonance fits

Resonance Statistics-Informed Fitting Applied to Automated Cross Section Evaluation

Wigner rules stabilize resonance density in automated cross section evaluation with little loss in data fit quality

This work investigates the use of resonance statistics for resonance evaluation to inform spin group assignment and an alternative fitting objective function beyond the commonly used chi-squared statistic. Resonance statistics-informed methods are applied to the automated resonance fitting framework developed by N. Walton et al. In this automated framework, the utility of resonance statistics is largely unexplored. The new resonance statistics-informed spin group shuffling algorithm reduces the spin group frequency bias seen in the base fitting algorithm. Although resonance statistics-informed optimization produces negligible changes in pointwise cross section agreement, it significantly improves consistency with Wigner level-spacing statistics and stabilizes the fitted resonance density in the presence of model imperfections.
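The Wigner level-spacing statistic referenced above is, for a single spin group, the GOE surmise P(s) = (pi/2) s exp(-pi s^2 / 4) for nearest-neighbor spacings normalized to unit mean; a sketch of sampling and checking it (synthetic spacings, not evaluated resonances):

```python
import math
import random

def wigner_pdf(s):
    """GOE Wigner surmise for nearest-neighbor spacings (mean spacing 1)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def wigner_sample(rng):
    """Inverse-CDF sample: F(s) = 1 - exp(-pi s^2 / 4)."""
    u = rng.random()
    return math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

rng = random.Random(0)
spacings = [wigner_sample(rng) for _ in range(20_000)]
mean_spacing = sum(spacings) / len(spacings)  # should be close to 1
```

The vanishing density at s = 0 (level repulsion) is what makes the statistic sensitive to missed or spuriously split resonances.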
physics.comp-ph 2026-04-20

Workflow traces code versions to published figures

From Code to Figure: A FAIR-Aligned Data Provenance Chain for Reproducible Simulation Research in Numerical Physics

Version control, testing, logging, and metadata combine to keep simulation results traceable as code evolves over years.

Computational physics increasingly depends on large simulation datasets generated by software that remains under active development for many years. In such settings, reproducibility requires not only well documented data but also explicit links between code versions, simulation inputs, generated outputs, analysis steps, and published figures. Here, we present an integrated workflow for reproducible and FAIR-aligned simulation research in numerical physics. We describe how version control, code review, automated testing, structured logging, metadata-rich output, and standardized post-processing can be combined to support traceability from software development to publication. The presented concepts demonstrated for one particular simulation framework are broadly applicable to computational physics and other data-intensive areas of scientific computing.
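The code-to-output link described above can be as small as a sidecar metadata record written next to each simulation result; a minimal sketch with illustrative field names (not the paper's schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(code_version, input_text):
    """Sidecar metadata tying one output to a code version and input hash."""
    return {
        "code_version": code_version,       # e.g. a git commit hash
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

rec = provenance_record("a1b2c3d", "dt = 0.01\nsteps = 1000\n")
serialized = json.dumps(rec, indent=2)      # written next to the output file
```

Because the input is identified by content hash rather than filename, the record stays valid even as files are moved or the code evolves.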
physics.comp-ph 2026-04-20

Probabilistic workflow quantifies uncertainty in fracture transmissivity

Probabilistic Upscaling of Hydrodynamics in Geological Fractures Under Uncertainty

Bayesian correction and neural surrogate together produce physics-consistent permeability ranges for natural shear fractures.

Flow and transport in fractured geological media are strongly controlled by aperture heterogeneity and uncertainty in subsurface characterisation, yet most upscaling approaches rely on deterministic representations of fracture permeability. This study presents a scalable probabilistic workflow that bridges image-based fracture geometry and uncertainty-aware hydraulic predictions across scales. The approach integrates Bayesian correction of aperture-permeability model misspecification, a deep learning surrogate for predicting spatially distributed permeability statistics, and Darcy-scale flow upscaling to propagate uncertainty to effective transmissivity. The workflow is applied to natural shear fractures from core material in the Little Grand Wash Fault damage zone (Utah) and to simplified geometries derived from the same datasets. The Bayesian component quantifies uncertainty due to measurement errors and imperfect constitutive relations, while a Residual U-Net learns the effects of local heterogeneity and spatial correlation on predicted permeability uncertainty. Together, these components generate ensembles of permeability fields that are subsequently upscaled to probabilistic macroscopic flow responses. Results show that common empirical aperture-permeability relations are systematically biased for natural fractures, whereas the proposed probabilistic workflow yields uncertainty-aware permeability estimates consistent with physics-based behaviour. The method captures the impact of channelisation, connectivity, and complex 3D void geometries on transmissivity while quantifying the resulting uncertainty bounds. Computational efficiency arises from the proposed hybrid strategy for probabilistic upscaling, which combines physics-informed and data-driven approaches, preserves Stokes-flow consistency and supports uncertainty propagation without repeated high-fidelity simulations.
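The deterministic baseline such workflows correct is typically the local cubic law, with transmissivity scaling as b^3/12 for aperture b, and the effective value of a heterogeneous field bracketed by its harmonic (series-flow) and arithmetic (parallel-flow) means; a numpy sketch on a synthetic aperture field (statistics illustrative, not the Utah fracture data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Lognormal aperture field [m]: a common rough-fracture stand-in.
b = np.exp(rng.normal(np.log(1e-4), 0.3, size=(64, 64)))

T_local = b**3 / 12.0                  # local cubic-law transmissivity
T_arith = T_local.mean()               # parallel-layer (upper) bound
T_harm = 1.0 / (1.0 / T_local).mean()  # series-flow (lower) bound
```

The gap between the two bounds is one way to see why channelisation and connectivity, not just mean aperture, control the upscaled response.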
physics.comp-ph 2026-04-20

Neural flux operator keeps conservation laws intact in flow simulations

A Structure-Preserving Graph Neural Solver for Parametric Hyperbolic Conservation Laws

By learning reconstruction and flux rules from classical schemes, the graph solver stays stable across many parameter values and runs orders of magnitude faster than high-resolution simulations.

Hyperbolic conservation laws govern a wide range of transport-driven dynamics featuring shocks, contact discontinuities, and complex wave interactions, posing distinct challenges for deep-learning-based surrogate modeling. While classical numerical methods provide robust and physically admissible solutions, their computational cost restricts applicability in many-query tasks such as parametric studies and design optimization. Conversely, existing neural surrogates offer rapid inference but often fail to respect intrinsic PDE structures, leading to non-physical artifacts, rollout instability, and poor generalization. We present an interpretable, structure-preserving graph neural solver that bridges classical numerical principles with graph neural networks (GNNs). The network is designed as a learned reconstruction-and-flux operator rather than a black-box state updater, thereby inherently preserving key properties such as local conservation and upwinding. Inspired by Arbitrary high-order DERivatives schemes, we further recast message-passing GNNs as high-order space-time predictors, enabling conservative and stable neural updates with large time steps. Evaluation is performed on challenging supersonic flow benchmarks spanning broad parametric variations in geometry, initial/boundary conditions, and flow regimes. The neural solver achieves superior long-horizon rollout stability and accuracy compared with strong surrogate baselines, outperforms low-order discretizations, and delivers orders-of-magnitude runtime speedups over high-resolution simulations.
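The local-conservation property preserved by construction is easy to see in a classical scheme: when every cell update uses one shared flux per interface, the total sum of cell averages changes only through boundary fluxes; a first-order upwind sketch for linear advection (illustrative of the structure, not the paper's GNN):

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """Conservative update u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2}).

    For a > 0 the upwind interface flux is F_{i+1/2} = a * u_i;
    with periodic boundaries, sum(u) is conserved to round-off.
    """
    flux = a * u                          # flux at interface i+1/2
    return u - (dt / dx) * (flux - np.roll(flux, 1))

dx = 1.0 / 200
grid = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-((grid - 0.5) ** 2) / 0.01)   # smooth initial profile
total0 = u.sum()
for _ in range(100):
    u = upwind_step(u, a=1.0, dt=0.4 * dx, dx=dx)  # CFL = 0.4
```

A learned flux operator inherits the same guarantee as long as it, too, emits a single flux per interface rather than independent per-cell updates.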
physics.comp-ph 2026-04-17

qFHRR cuts bits per dimension to 3-4 while keeping FHRR properties

qFHRR: Rethinking Fourier Holographic Reduced Representations through Quantized Phase and Integer Arithmetic

Discrete phase indices and modular arithmetic support integer-only binding and similarity with little loss to the original structure.

Fourier Holographic Reduced Representations (FHRR) provide a compositional framework for encoding structured information with complex-valued hypervectors. FHRR rely on floating-point arithmetic, which limits their efficiency and applicability on resource-constrained hardware. We introduce qFHRR, a quantized phase formulation of FHRR. In this representation, each dimension is encoded as a discrete phase index, enabling integer-only implementations of binding, unbinding, similarity, and bundling through modular arithmetic and lookup tables. We show that qFHRR preserves the algebraic properties of complex FHRR while significantly reducing the number of bits per dimension, from 64-bit complex representations to as few as 3--4 bits. Across a range of phase resolutions, qFHRR maintains high fidelity to the complex baseline, achieving strong performance even at low bit-widths. We further demonstrate that qFHRR preserves the spatial similarity structure induced by fractional binding. This enables accurate multi-object memory representations despite significant quantization. These results indicate that qFHRR provides an efficient and scalable alternative to complex FHRR, preserving the algebraic operations and similarity structure of the representation. It also reduces memory footprint and enables hardware-friendly implementations.
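With Q phase levels per dimension, binding and unbinding become modular addition and subtraction of integer phase indices, and similarity a cosine lookup; a minimal sketch of that arithmetic (Q and the dimensionality chosen for illustration, not the authors' implementation):

```python
import math
import random

Q, D = 16, 1024                      # phase levels (4 bits) and dimensions
COS_LUT = [math.cos(2 * math.pi * k / Q) for k in range(Q)]

rng = random.Random(0)
def rand_hv():
    """Random hypervector: one phase index per dimension."""
    return [rng.randrange(Q) for _ in range(D)]

def bind(a, b):                      # elementwise phase addition mod Q
    return [(x + y) % Q for x, y in zip(a, b)]

def unbind(a, b):                    # exact inverse: phase subtraction mod Q
    return [(x - y) % Q for x, y in zip(a, b)]

def similarity(a, b):                # mean cosine of phase differences
    return sum(COS_LUT[(x - y) % Q] for x, y in zip(a, b)) / D

x, k = rand_hv(), rand_hv()
recovered = unbind(bind(x, k), k)    # binding is invertible exactly
```

Unlike float-phase FHRR, unbinding here recovers the bound vector exactly, since modular integer arithmetic has no rounding.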
physics.comp-ph 2026-04-17

The paper introduces a numerical method for simulating phase changes such as boiling on…

Sharp-interface VOF method for phase-change simulations on unstructured meshes

A sharp-interface VOF method for phase-change simulations on unstructured meshes computes evaporation rates from local temperature…

Unstructured meshes are among the most versatile approaches for capturing non-canonical geometries in fluid dynamics simulations. Despite this, most high-fidelity first-principles phase-change models are developed and applied on structured meshes. We present a phase-change simulation method for unstructured meshes that combines the algebraic Volume-of-Fluid (VOF) technique with geometric interface reconstruction, implemented in an in-house open-source CFD code. Phase-change rates are computed from local temperature gradients evaluated at the reconstructed interface, without empirical closure models, using a reconstruction procedure that operates on arbitrary polyhedral cells. Because the method relies on the standard finite-volume framework, it can be integrated into other cell-centred codes supporting unstructured meshes. The approach is validated against the one-dimensional Stefan and Sucking problems and the three-dimensional Scriven bubble growth on both hexahedral and polyhedral meshes, showing good agreement with analytical solutions in all three cases. A detailed analysis of the Scriven problem reveals that the interface-modified least-squares gradient stencil on Cartesian meshes overestimates the interfacial temperature gradient, producing a persistent overshoot of the analytical bubble radius and a coherent four-fold anisotropy that elongates the bubble along grid diagonals. On polyhedral meshes, the irregular face orientations eliminate both effects, yielding isotropic growth and monotonic convergence. Finally, we demonstrate the framework on turbulent upward co-current annular boiling flow, where early transient results are qualitatively consistent with a previous LES study and experimental observations of wave-modulated evaporation.
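The one-dimensional Stefan benchmark used for validation has the analytical interface position x(t) = 2*lam*sqrt(alpha*t), with lam solving the transcendental equation lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi); a bisection sketch (the Stefan number value is illustrative):

```python
import math

def stefan_lambda(stefan_number, lo=1e-8, hi=5.0, tol=1e-12):
    """Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) by bisection."""
    target = stefan_number / math.sqrt(math.pi)
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:    # root bracketed in [lo, mid]
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam = stefan_lambda(0.1)             # small-Ste regime typical of such tests
interface_scale = 2.0 * lam          # x(t) = interface_scale * sqrt(alpha*t)
```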
physics.comp-ph 2026-04-17

Meshless method achieves spectral accuracy for axisymmetric vesicles

Spectrally Accurate Simulation of Axisymmetric Vesicle Dynamics

Adaptive reparameterization and gauge dynamics reduce the number of harmonics while error control and precise quadrature maintain accuracy.

We present a meshless numerical method for simulating the dynamics of axisymmetric vesicles in a viscous medium. Key innovations include: (1) adaptive reparameterization based on local length scales, reducing the number of required harmonics; (2) gauge dynamics for maintaining optimal parameterization; (3) error control near the symmetry axis; and (4) spectrally accurate quadrature schemes for singular integrals. The method achieves high accuracy and computational efficiency for simulating lipid bilayer dynamics and related problems in soft matter physics.
physics.comp-ph 2026-04-16 2 theorems

Bulk measurements recover microstructure statistics

Distributional Inverse Homogenization

Distributional inversion extracts global microstructural distributions from many macroscopic observations without local probes.

For many materials, macroscopic mechanical behavior is determined by an intricate microstructure. Understanding the relation between these two scales helps scientists and engineers design better materials. The relation which maps microstructure to bulk mechanical properties can be understood via the well-established theory of homogenization. However, inverting the homogenization process, to recover microstructural information from measured macroscopic properties, is fraught with difficulties because of the averaging processes that underlie homogenization. Therefore, scientists and engineers usually need recourse to more invasive, often highly localized, investigations to learn about a microstructure. In this work, we develop a noninvasive methodology by which one can leverage large collections of measured bulk mechanical properties to learn information about the statistics of microstructure at a global level. We call this distributional inverse homogenization. We study this problem in one and two dimensions, considering both periodic and stochastic homogenization. We demonstrate the methodology in the context of 2D Voronoi constructions and underpin the observed empirical success with theory in 1D. We also show how the natural spatial variability of microstructure can be exploited to gather data that enables distributional inversion. And we concurrently learn a surrogate model, approximating the homogenization map, that accelerates the resulting computations in this setting. The work formulates a new class of inverse problems, bridging ideas from probability and homogenization to facilitate the learning of microstructural material variability from macroscopic measurements.
physics.comp-ph 2026-04-16

Active learning embeds extrapolative atomic environments from large simulations into…

NEPMaker: Active learning of neuroevolution machine learning potential for large cells

Identifying extrapolative atomic arrangements on the fly and embedding them into periodic structures lets large simulations help train more robust and transferable potentials.

Machine learning potentials (MLPs) achieve near first-principles accuracy but often fail for atomic environments outside the training distribution. Active learning can mitigate this limitation; however, its application to large-scale simulations is hindered by the prohibitive cost of labeling entire configurations. Here, we develop a D-optimality-driven active learning framework for the neuroevolution potential (NEP) implemented within the GPUMD package, named NEPMaker. Extrapolative atomic environments are identified on-the-fly and embedded into locally periodic structures, where boundary atoms are optimized to remain close to the training distribution. This strategy enables large-scale simulations to directly contribute to dataset construction, significantly reducing extrapolation errors while improving model robustness and transferability. The proposed framework provides a scalable route for constructing reliable machine learning potentials in complex materials systems, including those involving defects, interfaces, and phase transitions.
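D-optimality-driven selection of the kind described above is often phrased via a MaxVol-style extrapolation grade: with a square active-set descriptor matrix A, a new environment x is flagged when gamma(x) = max|x A^-1| exceeds 1, i.e. when x cannot be written over the active-set rows with coefficients of magnitude at most 1; a numpy sketch on toy descriptors (not NEP's actual features):

```python
import numpy as np

def extrapolation_grade(A, x):
    """MaxVol-style grade: max |c| with c solving c @ A = x.

    gamma <= 1 means x is representable over the active-set rows with
    coefficients of magnitude <= 1; gamma > 1 flags extrapolation and
    marks the environment as a candidate for labeling.
    """
    c = np.linalg.solve(A.T, x)       # c = x @ inv(A), via a linear solve
    return np.abs(c).max()

A = np.eye(3)                         # trivial active set of unit descriptors
inside = np.array([0.2, 0.3, 0.4])    # within the span: gamma < 1
outside = np.array([2.0, 0.0, 0.0])   # beyond it: gamma > 1
```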
physics.comp-ph 2026-04-16

Dual-graph operator extrapolates FSI flows 50 steps with 0.168 error

AeTHERON: Autoregressive Topology-aware Heterogeneous Graph Operator Network for Fluid-Structure Interaction

Architecture copies immersed-boundary stencils so one short training run covers a grid of flapping-fin parameters without retraining.

Surrogate modeling of body-driven fluid flows where immersed moving boundaries couple structural dynamics to chaotic, unsteady fluid phenomena remains a fundamental challenge for both computational physics and machine learning. We present AeTHERON, a heterogeneous graph neural operator whose architecture directly mirrors the structure of the sharp-interface immersed boundary method (IBM): a dual-graph representation separating fluid and structural domains, coupled through sparse cross-attention that reflects the compact support of IBM interpolation stencils. This physics-informed inductive bias enables AeTHERON to learn nonlinear fluid-structure coupling in a shared high-dimensional latent space, with continuous sinusoidal time embeddings providing temporal generalization across lead times. We evaluate AeTHERON on direct numerical simulations of a flapping flexible caudal fin, a canonical FSI benchmark featuring leading-edge vortex formation, large membrane deformation, and chaotic wake shedding across a 4x5 parameter grid of membrane thickness (h* = 0.01-0.04) and Strouhal number (St = 0.30-0.50). As a proof-of-concept, we train on the first 150 timesteps of a representative case using a 70/30 train/validation split and evaluate on the fully unseen extrapolation window t=150-200. AeTHERON captures large-scale vortex topology and wake structure with qualitative fidelity, achieving a mean extrapolation MAE of 0.168 without retraining, with error peaking near flapping half-cycle transitions where flow reorganization is most rapid -- a physically interpretable pattern consistent with the nonlinear fluid-membrane coupling. Inference requires milliseconds per timestep on a single GPU versus hours for equivalent DNS computation. This is a continuously developing preprint; results and figures will be updated in subsequent versions.
physics.comp-ph 2026-04-15

Generative model plus genetic search cuts catalyst barrier by 30%

Hierarchical generative modeling for the design of multi-component systems

Framework optimizes surrounding molecules around a fixed transition state to lower activation energy in multi-component systems.

The functionality of catalysts, enzymes, and supramolecular assemblies emerges not from individual molecules alone, but from the subtle interplay between multiple components arranged in complex systems. Designing such systems is a grand challenge: the combinatorial explosion of possible chemical compositions and spatial arrangements makes brute-force exploration infeasible, while many current generative approaches remain limited to isolated molecules. In this work, we introduce a hierarchical generative optimization framework that overcomes this barrier by coupling a genetic algorithm for configurational search with a generative model for molecular design. This closed-loop approach enables simultaneous refinement of geometry and composition, efficiently steering discovery toward systems with targeted functionality. As a proof of concept, we design catalytic environments for the Claisen rearrangement of p-tolyl ether by optimizing surrounding components around a fixed reference transition-state geometry. Despite this constraint during the search phase, post-hoc validation via Climbing-Image Nudged Elastic Band calculations confirms a 30% reduction in activation barrier. Beyond this example, our framework provides a general strategy for data-driven discovery of functional multi-component systems, opening the door to automated design of catalysts, enzyme active sites, and advanced materials. Scientific contribution. The study presents a closed-loop generative framework that enables joint optimization of molecular components and their spatial organization in multi-component systems. The method moves generative molecular design beyond single molecules toward larger and more complex systems.
physics.comp-ph 2026-04-15

LLM agent runs full research loop on 111 physics papers

Towards grounded autonomous research: an end-to-end LLM mini research loop on published computational physics

It raises concerns on 42% of them, most visible only after new calculations, and produces a publishable Comment revising a Nature paper's conclusion.

Recent autonomous LLM agents have demonstrated end-to-end automation of machine-learning research. Real-world physical science is intrinsically harder, requiring deep reasoning bounded by physical truth and, because real systems are too complex to study in isolation, almost always built on existing literature. We focus on the smallest meaningful unit of such research, a mini research loop in which an agent reads a paper, reproduces it, critiques it, and extends it. We test this loop in two complementary regimes: scale and depth. At scale, across 111 open-access computational physics papers, an agent autonomously runs the read-plan-compute-compare loop and, without being asked to critique, raises substantive concerns on ~42% of papers - 97.7% of which require execution to surface. In depth, for one Nature Communications paper on multiscale simulation of a 2D-material MOSFET, the agent runs new calculations missing from the original and produces, unsupervised, a publishable Comment -- composed, figured, typeset, and PDF-iterated -- that revises the paper's headline conclusion.
physics.comp-ph 2026-04-14

Bessel functions create passive element for tissue modeling

On mathematical characterization of a Bessel functions-based passive element in electronic circuits

Impedance and admittance via modified Bessel functions keep analyticity, passivity and stability while fitting biological tissue data

Modeling relaxation phenomena in complex media is central to understanding multiscale dynamics in materials science, bioengineering and condensed matter physics. Existing fractional-order models, while flexible, sometimes lack physical interpretability, closed-form time-domain expressions, and compatibility with physically realizable architectures. In this work, we propose a novel passive element whose impedance and admittance are defined analytically via modified Bessel functions of first kind, through the electro-mechanical analogy. This approach preserves key physical properties such as analyticity, passivity, BIBO (bounded-input, bounded-output) stability and monotonicity, while enabling the direct use of its time-domain representation in simulations and system modeling. As an application, we demonstrate that this model accurately captures the broadband dispersive behavior of biological tissues, offering a physically grounded and tractable alternative to fractional-order formulations.
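The abstract does not give the impedance formula itself; as a minimal taste of the building block, the modified Bessel function of the first kind has an everywhere-convergent power series, which is what makes closed-form time-domain expressions tractable. A sketch (integer order only):

```python
import math

def bessel_i(nu, x, terms=30):
    """Modified Bessel function of the first kind, integer order nu, from its
    everywhere-convergent series: I_nu(x) = sum_k (x/2)^(2k+nu) / (k! (k+nu)!)."""
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

# I_0(0) = 1 and I_0 is monotone increasing for x > 0, consistent with the
# monotonicity the paper requires of a passive element.
vals = [bessel_i(0, x) for x in (0.0, 0.5, 1.0, 2.0)]
```

For production use one would call `scipy.special.iv` instead; the series form is shown only because analyticity of the element's impedance rests on it.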
physics.comp-ph 2026-04-13

Phase-field model sets interface and bulk toughness separately

A unified sharp-diffusive phase-field model for bulk and interfacial cohesive fracture

An analytical source term with the Ω²-model lets cohesive-law parameters come directly from local material properties, eliminating extra mesh refinement.

In traditional phase-field modeling of multiphase materials, a significant challenge arises from the non-local nature of fracture energy regularization, where interfacial toughness is inherently coupled with the properties of the surrounding bulk phases. Achieving consistency with prescribed material properties typically necessitates complex corrections and exceptionally fine local mesh refinement near the interfaces. To address this fundamental issue, we leverage the capacity of the recently proposed $\Omega^2$-model to manifest Dirac-like damage concentration and emergent displacement discontinuities, while introducing an analytical, strongly localized interfacial source term $q_{\phi}$ into the phase-field formulation. It should be emphasized that the ``sharp'' nature of the proposed model manifests as a naturally emergent strong discontinuity within a continuum framework, fundamentally distinguishing it from inherently discrete approaches such as the cohesive element method. This allows for the independent and precise control of interface toughness in a straightforward manner. Theoretical analysis further reveals that the proposed framework can describe the cohesive failure of both bulk and interfacial regions using a unified set of parametric equations for the cohesive law, where the model parameters are directly determined by the local material properties without the need for additional corrections. The model's versatility is numerically validated through a series of benchmarks. The results confirm that the proposed model not only accurately reproduces diverse interfacial cohesive laws but also captures the intricate competition between interfacial debonding and matrix cracking. This sharp-diffusive phase-field model may provide a robust and computationally efficient tool for predicting complex fracture trajectories in sophisticated engineering materials.
physics.comp-ph 2026-04-13

Sign blocking extracts fermionic energies from signed samples

A sign-blocking method for mitigating the fermion sign problem

Post-processing of signed samples into data blocks uncovers energy-sign correlations to bypass the sign problem in Hubbard model simulations.

The fermion sign problem remains the primary obstacle in simulating the thermodynamic properties of various fermionic systems. In this work, we present a sign-blocking method to mitigate the numerical instability inherent in the sign problem. In the sign-blocking method, the Monte Carlo importance sampling remains identical to traditional methods; instead, the sign-blocking method is applied during the post-processing of signed samples. Given the significant progress in simulating the 2D Fermi-Hubbard model over the past decade, a wealth of energy benchmarks is available for comparison. Consequently, we use the 2D Fermi-Hubbard model as a benchmark to validate the sign-blocking method. Surprisingly, our results align exceptionally well with existing state-of-the-art benchmarks, even in regimes previously considered challenging. The physical mechanism of the sign-blocking method lies in uncovering the correlation between energy and sign factors through data blocking, thereby successfully inferring the fermionic system's energy. Our findings suggest that the sign-blocking method holds promise for complex quantum systems, particularly when combined with appropriate simulation techniques such as auxiliary-field formalisms that trace out the fermionic degrees of freedom.
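The paper's blocking construction is not spelled out in the abstract; the sketch below shows only the generic signed-sample setting it post-processes: observables are ratios ⟨E·s⟩/⟨s⟩, and grouping samples into blocks exposes how the estimate and its spread behave. The synthetic data and the independence of energy and sign here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signed-sampling data: the "energy" of each sample and the sign factor
# carried over from the fermionic weight (average sign ~ 0.6 here).
n = 100_000
energy = rng.normal(-1.0, 0.5, n)
sign = np.where(rng.random(n) < 0.8, 1.0, -1.0)

def signed_estimate(energy, sign, block=1000):
    """Estimate <E s> / <s> from signed samples, using block averages to
    get an error bar for the ratio."""
    nb = len(energy) // block
    e = energy[: nb * block].reshape(nb, block)
    s = sign[: nb * block].reshape(nb, block)
    num = (e * s).mean(axis=1)                     # per-block <E s>
    den = s.mean(axis=1)                           # per-block <s>
    est = num.sum() / den.sum()                    # pooled ratio estimator
    err = np.std(num / den, ddof=1) / np.sqrt(nb)  # naive block error bar
    return est, err

est, err = signed_estimate(energy, sign)  # energy and sign are independent
                                          # here, so est should sit near -1.0
```

The paper's contribution goes further, exploiting correlations between energy and sign within blocks; this sketch is only the baseline ratio estimator that any sign-afflicted simulation must evaluate.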
physics.comp-ph 2026-04-13

Constrained reconstruction fixes negative levels in cross-section tables

Admissible Reconstruction of Reaction-Channel Levels on Fixed Subgroup Support for Cross-Section-Space Probability Table Constructions

Retaining low-order aggregates exactly and least-squares fitting the rest restores nonnegativity with modest accuracy trade-off.

In cross-section-space probability table constructions, reaction-channel levels are reconstructed on fixed total-subgroup nodes and probabilities. Although the standard full-matching reconstruction is uniquely determined, it does not in general preserve componentwise nonnegativity of the channel levels. We impose nonnegativity both for physical interpretability and because, on fixed positive total-subgroup nodes and probabilities, it provides a sufficient structural condition for nonnegativity of the folded effective cross section over all dilutions. We therefore formulate an admissible constrained reconstruction problem on the fixed subgroup support, in which selected low-order channel information is retained exactly and the remaining matching conditions are fitted in a weighted least-squares sense. After null-space reduction, the problem becomes a convex optimization problem with linear inequality constraints. For the single-retention formulation, nonnegative feasibility is automatic when the retained \(0\)-order aggregate is nonnegative, whereas for a two-retention variant it additionally requires a compatibility condition with the fixed total-subgroup nodes. Numerical results for a representative U-238 capture benchmark show that nonnegativity violations are confined to a small subset of energy groups. On these groups, the admissible reconstruction restores nonnegativity, but at the cost of some response-level deterioration relative to full matching. In the comparison, the single-retention formulation shows the more stable overall behavior.
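As a generic illustration of "retain a low-order aggregate exactly, least-squares fit the rest under nonnegativity", the sketch below uses the null-space reduction the abstract mentions and a small convex solve; the matrices and numbers are made up and carry none of the paper's subgroup physics.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy data: 4 fixed subgroup probabilities p and 4 moment-like matching rows A.
p = np.array([0.2, 0.3, 0.3, 0.2])
A = np.vander(np.arange(4), 4, increasing=True).T.astype(float)
b = A @ np.array([0.5, 0.0, 1.2, 0.3]) + 0.1 * rng.normal(size=4)

m0 = 1.0               # retained 0-order aggregate: p @ x = m0, held exactly
x0 = p * m0 / (p @ p)  # particular solution of the retained constraint
_, _, Vt = np.linalg.svd(p[None, :])
N = Vt[1:].T           # null-space basis: p @ N = 0, so p @ (x0 + N z) = m0

def objective(z):
    r = A @ (x0 + N @ z) - b  # remaining matching conditions, least squares
    return r @ r              # (unit weights for this sketch)

cons = [{"type": "ineq", "fun": lambda z: x0 + N @ z}]  # x >= 0 componentwise
res = minimize(objective, np.zeros(3), method="SLSQP", constraints=cons)
x = x0 + N @ res.x
```

Because `N` spans the null space of `p`, the retained aggregate is preserved by construction for every feasible `z`, which is exactly the structure the abstract's null-space reduction exploits.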
physics.comp-ph 2026-04-13

Power series method conserves particle energy 4-13 orders better than RK

High-Accuracy Numerical Solutions of Particle Motion in Static Magnetic Fields

In static magnetic fields the Parker-Sochacki integrator also runs faster at matched accuracy and stays stable for electrons where Runge-Kutta methods fail.

The Parker-Sochacki (PS) method is investigated as an alternative to Runge-Kutta (RK) methods for solving the Lorentz equations of motion for a charged particle in a static magnetic field. Traditional methods, including fixed-time-step fourth-order RK, adaptive Dormand-Prince RK, and Gauss-Legendre Runge-Kutta (RKG), advance the solution by sampling derivative estimates at selected points to approximate the solution over a time increment. In contrast, the PS method uses a power series expansion in time that is specific to the system of equations, which is a fundamentally different approach. We assess the accuracy and long-term stability of the RK, RKG, and PS methods for three static magnetic fields: uniform, hyperbolic tangent, and dipole, with the RKG method included only for the dipole problem. The PS method results in a 4 to 13 orders-of-magnitude improvement in kinetic energy conservation over the RK methods. When the methods are compared at matched target kinetic energy error, the PS method was substantially faster than RK4, the method with the shortest runtime under identical fixed-time-step conditions. For the dipole field problem, the PS method had the lowest kinetic energy error and had runtimes 4 to 5 times shorter than RKG when using the same fixed time step for proton runs. The PS method was the only method in this study to maintain accuracy and stability for all problems for both protons and electrons; the RKG method failed on all electron runs in the dipole problem. We further show that, over sufficiently long integrations in inhomogeneous magnetic fields, the symplectic RKG may exhibit secular growth in energy error. Overall, these results indicate that the PS method provides a computationally efficient and highly accurate alternative to the symplectic RKG and standard RK methods.
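For a uniform field the PS idea reduces to a compact recurrence: the Taylor coefficients of v(t) follow directly from dv/dt = ω v × ẑ, and each step sums the truncated series. A toy sketch (uniform field only, in normalized units; the paper treats general static fields and positions as well):

```python
import numpy as np

def ps_step(v, omega, dt, order=12):
    """One Parker-Sochacki step for dv/dt = omega * (v x zhat): build the
    Taylor coefficients of v(t) by recurrence, then sum the truncated series."""
    vx, vy, vz = [v[0]], [v[1]], [v[2]]
    for k in range(order):
        vx.append(omega * vy[k] / (k + 1))   # (v x zhat)_x = v_y
        vy.append(-omega * vx[k] / (k + 1))  # (v x zhat)_y = -v_x
        vz.append(0.0)                       # parallel velocity is constant
    t = dt ** np.arange(order + 1)
    return np.array([np.dot(vx, t), np.dot(vy, t), np.dot(vz, t)])

v0 = np.array([1.0, 0.0, 0.2])
v = v0.copy()
for _ in range(1000):
    v = ps_step(v, 1.0, 0.1)

speed_err = abs(np.linalg.norm(v) - np.linalg.norm(v0))  # kinetic-energy proxy
```

With order 12 and ω·dt = 0.1 the truncation error sits below machine precision, so the speed (and hence kinetic energy) is conserved to rounding over the whole run, the behavior the paper quantifies against RK methods.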
physics.comp-ph 2026-04-13

One optimization yields differentiable free energy surface and rare-event samples

Differentiable free energy surface: a variational approach to directly observing rare events using generative deep-learning models

Reversible collective-variable extension creates a latent space where energy is directly accessible, bypassing the need for pre-generated MD data.

Rare events are central to the evolution of complex many-body systems, characterized as key transitional configurations on the free energy surface (FES). Conventional methods require adequate sampling of rare event transitions to obtain the FES, which is computationally very demanding. Here we introduce the variational free energy surface (VaFES), a dataset-free framework that directly models FESs using tractable-density generative models. Rare events can then be immediately identified from the FES with their configurations generated directly via one-shot sampling of generative models. By extending a coarse-grained collective variable (CV) into its reversible equivalent, VaFES constructs a latent space of intermediate representation in which the CVs explicitly occupy a subset of dimensions. This latent-space construction preserves the physical interpretability and transparent controllability of the CVs by design, while accommodating arbitrary CV formulations. The reversibility makes the system energy exactly accessible, enabling variational optimization of the FES without pre-generated simulation data. A single optimization yields a continuous, differentiable FES together with one-shot generation of rare-event configurations. Our method can reproduce the exact analytical solution for the bistable dimer potential and identify a chignolin native folded state in close alignment with the experimental NMR structure. Our approach thus establishes a scalable, systematic framework for advancing the study of complex statistical systems.
physics.comp-ph 2026-04-10

AI accelerators run Monte Carlo simulation of 4 trillion atoms

SMC-AI: Scaling Monte Carlo Simulation to Four Trillion Atoms with AI Accelerators

SMC-AI framework achieves 32 times larger systems and 1.3 times higher throughput than prior records on NPU and GPU clusters.

The rapid advancement of deep learning is reshaping the hardware design landscape toward AI tasks, posing fundamental challenges for HPC workloads such as atomistic simulation. Here we present SMC-AI, a general algorithmic framework that extends the SMC-X method for efficient canonical Monte Carlo simulation on AI accelerators, including GPUs and NPUs, while maintaining extreme scalability. The implementation of SMC-AI on an NPU cluster reaches unprecedented performance, achieving MC simulation of 4 trillion atoms on 4096 NPU dies. This represents the largest ML-accelerated atomistic simulation reported, delivering 32X the system size and 1.3X the throughput of previous records, with a relatively small computational budget. Excellent strong and weak scaling efficiency are reached for both the NPU and GPU implementations. By decoupling ML models from simulation, SMC-AI creates an abstraction that facilitates integration and porting of diverse ML models, laying a foundation for the future development of scalable scientific software.
physics.comp-ph 2026-04-10 2 theorems

Direction-aware topology lifts Young's modulus prediction accuracy

Direction-aware topological descriptors for Young's modulus prediction in porous materials

Embedding the loading axis into persistent homology and Euler profiles improves results over standard TDA, especially as anisotropy rises.

Classical topological descriptors used in topological data analysis (TDA) are invariant under permutations of spatial axes and therefore cannot represent the loading direction, which is essential for modeling anisotropic mechanical response. Here, this limitation is addressed by introducing a \emph{direction-aware TDA framework} in which the compression axis is explicitly embedded into filtration functions used to compute both persistent homology and Euler characteristic profile descriptors. Across multiple porous-material datasets spanning a broad range of structural anisotropy, direction-aware descriptors yield higher predictive accuracy than their direction-agnostic counterparts, with performance gains that increase systematically with anisotropy. Notably, direction-aware descriptors remain competitive and often improve $R^2$ even for nominally isotropic ensembles, indicating sensitivity to mechanically relevant directional organization beyond bulk anisotropy measures. When used as inputs to gradient-boosted tree models, the proposed descriptors approach the accuracy of convolutional neural networks trained directly on voxelized structures while retaining a compact, transferable representation. The study considers multiple datasets spanning weak to strong anisotropy, enabling systematic validation of direction-aware topology across regimes. Overall, the results establish direction-aware TDA as a general route for linking porous structure to direction-dependent elastic properties and motivate its use in anisotropic materials modeling problems where a preferred direction naturally arises.
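A minimal way to see why embedding the loading axis matters: make the filtration value the coordinate along that axis, so sublevel sets grow in the loading direction. The sketch below uses only connected-component counts (a Betti-0 stand-in for the paper's persistent homology and Euler characteristic profiles) on a random toy structure.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

solid = rng.random((32, 32, 32)) < 0.3  # toy porous structure (solid voxels)

def directional_profile(solid, axis, steps=8):
    """Component-count profile of a direction-aware sublevel filtration:
    solid voxels enter in order of their coordinate along the chosen axis.
    (Betti-0 stand-in for persistent homology / Euler characteristic profiles.)"""
    coords = np.indices(solid.shape)[axis]
    profile = []
    for h in np.linspace(0, solid.shape[axis] - 1, steps):
        sub = solid & (coords <= h)  # sublevel set of the height filtration
        _, ncomp = ndimage.label(sub)
        profile.append(ncomp)
    return profile

prof_z = directional_profile(solid, axis=2)  # filtration along the loading axis
prof_x = directional_profile(solid, axis=0)  # same structure, different axis
# A permutation-invariant descriptor could not tell these two sweeps apart.
```

Swapping the axis changes the intermediate profile but not its final value (the full structure is the same), which is precisely the directional information a permutation-invariant descriptor discards.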
physics.comp-ph 2026-04-10 3 theorems

Reputation-linked exploration boosts cooperation in learning agents

Reinforcement learning with reputation-based adaptive exploration promotes the evolution of cooperation

Q-learning agents that explore less when reputation is high and more when it is low reach higher collective cooperation through asymmetric reputation updates.

Multi-agent reinforcement learning serves as an effective tool for studying strategy adaptation in evolutionary games. Although prior work has integrated Q-learning with reputation mechanisms to promote cooperation, most existing algorithms adopt fixed exploration rates and overlook the influence of social context on exploratory behavior. In practice, individuals may adjust their willingness to explore based on their reputation and perceived social standing. To address this, we propose a Q-learning model that couples exploration rates with local reputation differences and incorporates asymmetric, state-dependent reputation updates. Our results show that each mechanism independently promotes cooperation, and their combination yields a reinforcing effect. The joint mechanism enhances cooperation by making ``high reputation--low exploration, low reputation--high exploration'', while adjusting reputation updates to amplify cooperative gains at low status and defection penalties at high status. This study thus offers insights into how social evaluation can shape learning behavior in complex environments.
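The coupling's qualitative shape ("high reputation, low exploration; low reputation, high exploration") can be sketched as a reputation-dependent epsilon for an epsilon-greedy Q-learner; the linear form and all constants below are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def exploration_rate(rep_self, rep_neighbors, eps_base=0.3, gain=0.1,
                     eps_min=0.01, eps_max=0.9):
    """Reputation-coupled epsilon: explore less when local reputation standing
    is high, more when it is low. Linear form and constants are assumed."""
    delta = rep_self - np.mean(rep_neighbors)  # local reputation difference
    return float(np.clip(eps_base - gain * delta, eps_min, eps_max))

eps_high = exploration_rate(5.0, [1.0, 1.0, 1.0])  # well-regarded: exploit
eps_low = exploration_rate(1.0, [5.0, 5.0, 5.0])   # poorly regarded: explore
```

In a full model this epsilon would gate the random-action branch of each agent's Q-learning update, which is where the paper's joint mechanism with asymmetric reputation updates takes effect.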
physics.comp-ph 2026-04-09 2 theorems

GPU timestepper speeds alpha particle Monte Carlo in stellarators

CATAPULT: A CUDA-Accelerated Timestepper for Alpha Particles Using Local Tricubics

Local tricubics on CUDA deliver large speedups over CPU codes for both static fields and shear Alfven waves.

We introduce a CUDA-Accelerated Timestepper for Alpha Particles Using Local Tricubics (CATAPULT) for use in Monte Carlo calculations of alpha particle confinement in stellarators. Our GPU implementation is significantly faster than existing parallelized CPU implementations, and handles both equilibrium magnetic fields and Shear Alfven Waves. We test our implementation on several example stellarators to exhibit both the speed and correctness of our code. The source code is included in the firm3d Python package.
physics.comp-ph 2026-04-09

Faster rotation speeds up mixing in stirred bed reactors

Granular mixing and flow dynamics in horizontal stirred bed reactors

Simulations find higher speeds accelerate axial homogenization while higher fill levels slow it, with clear trade-offs in circulation and dispersion.

Horizontal stirred bed reactors (HSBRs) are used in gas--phase polyolefin production, where efficient solids mixing and controlled residence time distributions are essential for product quality and stability. Despite their industrial relevance, the influence of operating conditions on granular flow and mixing in HSBRs is not well understood. Discrete Element Method (DEM) simulations are used to study the effects of rotation speed and fill level on particle motion, mixing, and axial transport in a lab--scale HSBR. An industrial--grade polypropylene powder is modelled using calibrated contact parameters. Mixing is quantified using the Lacey index in axial (z) and cross--sectional (xy) directions. Particle circulation is characterised via cycle--time analysis and a coarse--grained angular velocity field. Axial dispersion coefficients are obtained from particle trajectories using both Einstein--type and cycle--based approaches, and validated with a diffusion model predicting the axial Lacey index. Results show that axial mixing depends strongly on rotation speed and fill level: higher rotation speeds accelerate homogenization, while higher fill levels slow mixing. Cross--sectional mixing is mainly sensitive to rotation speed, with fill--level effects diminishing at higher speeds. Cycle time decreases with increasing rotation speed and fill level, indicating enhanced circulation. Axial dispersion increases with rotation speed but decreases with fill level, with consistent results across methods. These findings reveal trade--offs between axial mixing, circulation, and dispersion, highlighting the need to balance operating conditions and demonstrating the capability of DEM to support HSBR optimisation.
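The Lacey index the abstract relies on compares the actual concentration variance across sampling cells with its fully segregated and fully random limits. A sketch (conventions for the segregated and random limits vary slightly across the DEM literature):

```python
import numpy as np

def lacey_index(counts_a, counts_b):
    """Lacey mixing index from per-cell counts of two particle species:
    M = (S0^2 - S^2) / (S0^2 - SR^2), where S^2 is the actual concentration
    variance over cells, S0^2 = p(1-p) the fully segregated limit, and
    SR^2 = p(1-p)/n_mean the fully random limit. 0 = segregated, ~1 = random."""
    a = np.asarray(counts_a, dtype=float)
    n = a + np.asarray(counts_b, dtype=float)
    p = a.sum() / n.sum()  # overall fraction of species A
    conc = a / n           # per-cell A concentration
    s2 = np.average((conc - p) ** 2, weights=n)
    s0 = p * (1 - p)
    sr = p * (1 - p) / n.mean()
    return (s0 - s2) / (s0 - sr)

segregated = lacey_index([100, 100, 0, 0], [0, 0, 100, 100])
mixed = lacey_index([52, 49, 50, 48], [48, 51, 50, 52])
```

In a DEM post-processing pipeline the cells would be axial (z) or cross-sectional (xy) bins of the reactor, and the index would be tracked over time to extract the homogenization rates the paper reports.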
physics.comp-ph 2026-04-09

Guided diffusion recovers full trajectories in gas-phase kinetics

Modelling Gas-Phase Reaction Kinetics with Guided Particle Diffusion Sampling

The method builds complete space-time solutions from sparse data and stays accurate for reaction parameters it never saw in training.

Physics-guided sampling with diffusion priors has recently shown strong performance in solving complex systems of partial differential equations (PDEs) from sparse observations. However, these methods are typically evaluated on benchmark problems that do not fully demonstrate their ability to generate temporally consistent solutions of time-dependent PDEs, often focusing instead on reconstructing a single snapshot. In this work, we apply these methods to gas-phase reaction kinetics problems governed by the advection-reaction-diffusion (ARD) equation, providing a setting that more closely reflects realistic laboratory experiments. We demonstrate that guided sampling can be used to reconstruct full spatiotemporal trajectories, rather than isolated states. Furthermore, we show that these methods generalise to previously unseen parameter regimes, highlighting their potential for real-world applications.
physics.comp-ph 2026-04-09

Database supplies dissociation trajectories for 19,037 complexes

A Massively Scalable Ligand-Protein Dissociation Dynamic Database Derived from Atomistic Molecular Modelling

0.3 billion frames and reweighted rates classify systems into three mechanistic types to support kinetic modeling in drug design.

Understanding the kinetics of drug-protein interactions is paramount for drug design, yet the field lacks large-scale, dynamic data to move beyond static structural analysis. Here, we present DD-03B, a massively scalable database providing dynamic, all-atom dissociation trajectories for a broad set of ligand-protein complexes. Utilising and extending a validated computational pipeline, we generated dissociation trajectories for 19,037 ligand-protein complexes sourced from PDBbind+v2020R1, resulting in a repository of approximately 0.3 billion simulation frames totalling 40 TB in size. For these systems -- which possess experimental binding affinities ($K_d$) but typically lack measured $k_{\mathrm{off}}$ rates -- we computed and assigned dissociation rate constants through trajectory reweighting. Our analysis reveals that protein-ligand complexes can be categorised into three mechanistic types (pathway-dominant, open-pocket, and entropy-pocket systems), each requiring distinct strategies for accurate kinetic characterisation. Together with our previously released DD-13M, DD-03B forms the core of the expandable Dissociation Dynamic Database (DDD) project, which will be continuously augmented with new trajectories. This large-scale, publicly available resource establishes a critical foundation for training and benchmarking next-generation generative AI models to predict and optimise drug-protein dissociation kinetics.
physics.comp-ph 2026-04-08 Recognition

DeepONets surrogate SWAN model for wave force predictions

Operator Learning for Surrogate Modeling of Wave-Induced Forces from Sea Surface Waves

The operator network matches radiation stress gradients and wave heights from full simulations in steady-state tests, allowing faster wave-circulation coupling.

Wave setup plays a significant role in transferring wave-induced energy to currents and causing an increase in water elevation. This excess momentum flux, known as radiation stress, motivates the coupling of circulation models with wave models to improve the accuracy of storm surge prediction, however, traditional numerical wave models are complex and computationally expensive. As a result, in practical coupled simulations, wave models are often executed at much coarser temporal resolution than circulation models. In this work, we explore the use of Deep Operator Networks (DeepONets) as a surrogate for the Simulating WAves Nearshore (SWAN) numerical wave model. The proposed surrogate model was tested on three distinct 1-D and 2-D steady-state numerical examples with variable boundary wave conditions and wind fields. When applied to a realistic numerical example of steady state wave simulation in Duck, NC, the model achieved consistently high accuracy in predicting the components of the radiation stress gradient and the significant wave height across representative scenarios.
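A DeepONet's forward pass is just an inner product between a branch net encoding the input function at fixed sensors and a trunk net encoding the query coordinate. A minimal untrained sketch (single linear-plus-tanh layers with random weights; a real surrogate would use deeper nets trained on SWAN outputs):

```python
import numpy as np

rng = np.random.default_rng(3)

def deeponet_forward(u_sensors, y, Wb, Wt):
    """Minimal untrained DeepONet: the branch net encodes the input function u
    sampled at fixed sensors, the trunk net encodes the query point y, and the
    prediction is their inner product over p shared basis modes."""
    branch = np.tanh(Wb @ u_sensors)        # b_k(u), k = 1..p
    trunk = np.tanh(Wt @ np.atleast_1d(y))  # t_k(y)
    return float(branch @ trunk)            # G(u)(y) ~ sum_k b_k(u) t_k(y)

m, p = 32, 16                               # sensor count, basis size
Wb = rng.normal(size=(p, m)) / np.sqrt(m)   # random (untrained) weights
Wt = rng.normal(size=(p, 1))

u = np.sin(np.linspace(0, np.pi, m))        # a toy boundary wave condition
pred = deeponet_forward(u, 0.5, Wb, Wt)
```

In the paper's setting, `u` would be the boundary wave conditions and wind field, `y` a spatial location, and the output a radiation stress gradient component or significant wave height.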
physics.comp-ph 2026-04-08 Recognition

One model solves Fokker-Planck transients for any initial state and parameters

A deep learning framework for jointly solving transient Fokker-Planck equations with arbitrary parameters and initial distributions

After a single training, the network predicts solutions at any time using Gaussian mixtures mapped to a shared latent space.

Efficiently solving the Fokker-Planck equation (FPE) is central to analyzing complex parameterized stochastic systems. However, current numerical methods lack parallel computation capabilities across varying conditions, severely limiting comprehensive parameter exploration and transient analysis. This paper introduces a deep learning-based pseudo-analytical probability solution (PAPS) that, via a single training process, simultaneously resolves transient FPE solutions for arbitrary multi-modal initial distributions, system parameters, and time points. The core idea is to unify initial, transient, and stationary distributions via Gaussian mixture distributions (GMDs) and develop a constraint-preserving autoencoder that bijectively maps constrained GMD parameters to unconstrained, low-dimensional latent representations. In this representation space, the panoramic transient dynamics across varying initial conditions and system parameters can be modeled by a single evolution network. Extensive experiments on paradigmatic systems demonstrate that the proposed PAPS maintains high accuracy while achieving inference speeds four orders of magnitude faster than GPU-accelerated Monte Carlo simulations. This efficiency leap enables previously intractable real-time parameter sweeps and systematic investigations of stochastic bifurcations. By decoupling representation learning from physics-informed transient dynamics, our work establishes a scalable paradigm for probabilistic modeling of multi-dimensional, parameterized stochastic systems.
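The constraint-preserving mapping the abstract describes can be sketched directly: decode an unconstrained latent vector into valid mixture parameters with softmax and exp, so any latent point yields a legal Gaussian mixture. The layout below (k logits, k means, k log-variances) is an assumed, simplified parameterization, not the paper's autoencoder.

```python
import numpy as np

def latent_to_gmd(z, k=3):
    """Decode an unconstrained latent vector into valid Gaussian-mixture
    parameters: softmax keeps the weights on the probability simplex and
    exp keeps every variance strictly positive, so any latent point is legal."""
    z = np.asarray(z, dtype=float)
    logits, means, log_vars = z[:k], z[k:2 * k], z[2 * k:3 * k]
    w = np.exp(logits - logits.max())  # stabilized softmax
    w /= w.sum()
    return w, means, np.exp(log_vars)

w, mu, var = latent_to_gmd(np.array([0.2, -1.0, 0.5, 0.0, 1.0, -2.0, 0.1, 0.1, 0.1]))
```

Because the decoding is bijective onto valid parameters, a single evolution network can move freely in the latent space while every decoded state remains a proper probability density, which is what lets one training cover all initial conditions.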
physics.comp-ph 2026-04-08 2 theorems

Relaxation step enforces exact mass and energy conservation in SP simulations

Efficient High-order Mass-conserving and Energy-balancing Schemes for Schr\"odinger-Poisson Equations

Post-processing added to any IMEX Runge-Kutta scheme keeps invariants intact up to rounding error, including for time-varying coefficients.

We study relaxation-based approaches for conserving mass and energy in the numerical solution of Schr\"odinger-Poisson (SP) type systems. Relaxation-based methods offer a general approach that can be applied as post-time step processing to achieve conservation with any time-stepping scheme. Here we study two types of relaxation techniques applied to implicit-explicit Runge-Kutta schemes, with Fourier collocation in space. We also study SP equations with time-varying coefficients (which appear naturally in cosmology) where energy is not conserved but satisfies a balance equation. We show that the fully-discrete system conserves both mass and energy (or satisfies the balance equation in case of time-varying coefficients), up to rounding errors. The effectiveness of these methods is demonstrated via numerical examples, including a three-dimensional cosmological simulation.
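The relaxation idea is easy to state: after a Runge-Kutta step produces an update du, rescale it to u + γ·du with γ chosen so the invariant exactly matches its pre-step value. A toy sketch on the harmonic oscillator (the paper applies this post-processing to IMEX-RK schemes for Schrödinger-Poisson systems):

```python
import numpy as np
from scipy.optimize import brentq

def rk4_step(f, u, dt):
    """Classical RK4, which alone does not conserve quadratic invariants."""
    k1 = f(u)
    k2 = f(u + dt / 2 * k1)
    k3 = f(u + dt / 2 * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def relaxed_step(f, u, dt, invariant):
    """Relaxation post-processing: rescale the update u -> u + gamma*du with
    gamma chosen so the invariant matches its pre-step value exactly."""
    du = rk4_step(f, u, dt) - u
    g = lambda gamma: invariant(u + gamma * du) - invariant(u)
    return u + brentq(g, 0.5, 1.5) * du  # nontrivial root sits near gamma = 1

f = lambda u: np.array([u[1], -u[0]])  # harmonic oscillator: x' = v, v' = -x
energy = lambda u: u[0] ** 2 + u[1] ** 2

u = np.array([1.0, 0.0])
for _ in range(1000):
    u = relaxed_step(f, u, 0.1, energy)
drift = abs(energy(u) - 1.0)  # conserved to root-finder tolerance per step
```

The same post-processing wraps around any time stepper, which is why the abstract can claim conservation (or the correct balance law) for arbitrary IMEX Runge-Kutta schemes.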
physics.comp-ph 2026-04-08 Recognition

Chemical embeddings boost CVT archive performance in molecule design

CVT Archives and Chemical Embedding Measures for Multi-Objective Quality Diversity in Molecular Design

CVT archives using ChemBERTa-2 embeddings achieve higher hypervolume and fill more niches than uniform grids for nonlinear optical molecules

Nonlinear optical (NLO) materials are essential for photonic technologies, yet discovering optimal NLO molecules requires balancing multiple competing objectives across vast chemical spaces. Previous work showed that Multi-Objective MAP-Elites (MOME) with grid-based archives discovers diverse, high-quality molecules for electro-optic applications. However, uniform grid partitioning wastes archive capacity on chemically infeasible regions while undersampling high-density areas. We apply MOME with Centroidal Voronoi Tessellation (CVT) archives whose cells are defined by learned embeddings from ChemBERTa-2 Multi-Task Regression reduced via UMAP, capturing chemical similarity beyond simple structural features. We investigate a four-objective NLO molecular design problem: maximizing the $\beta / \gamma$ hyperpolarizability ratio, constraining HOMO-LUMO gap and linear polarizability to target ranges, and minimizing energy per atom. Our results demonstrate that embedding-based measures in CVT archives yield significantly higher median global hypervolume and multi-objective quality diversity scores, while filling nearly all native archive niches.
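A CVT archive is essentially Lloyd's algorithm run on samples from the behavior/embedding space: centroids define the niches, and they concentrate where samples are dense. The sketch below uses synthetic 2-D points as a stand-in for the UMAP-reduced ChemBERTa-2 embeddings.

```python
import numpy as np

rng = np.random.default_rng(4)

def cvt_centroids(samples, k=16, iters=50):
    """Centroidal Voronoi tessellation via Lloyd's algorithm: archive niches
    (centroids) migrate toward dense regions of the embedding space instead
    of being wasted on empty cells, unlike a uniform grid."""
    c = samples[rng.choice(len(samples), k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None] - c[None], axis=2)
        nearest = d.argmin(axis=1)
        for j in range(k):
            members = samples[nearest == j]
            if len(members):
                c[j] = members.mean(axis=0)
    return c

# Stand-in for UMAP-reduced ChemBERTa-2 embeddings: two dense chemical families
emb = np.vstack([rng.normal(0.0, 0.3, (500, 2)), rng.normal(3.0, 0.3, (500, 2))])
cents = cvt_centroids(emb)

def niche(point, cents):
    """Archive cell (niche) index of a new candidate's embedding."""
    return int(np.linalg.norm(cents - point, axis=1).argmin())
```

Every centroid ends up inside one of the two dense families, illustrating the abstract's point: capacity goes to chemically feasible regions rather than to empty space.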
physics.comp-ph 2026-04-07 Recognition

Proton quantum effects stiffen H3S phonons but barely affect electrons

Proton Quantum Effects in H₃S Electronic Structure: A Multicomponent DFT study via Nuclear-Electronic Orbital Method

Calculations separate the contributions and show the Tc drop upon deuteration comes from lattice vibrations, not electronic structure.

We investigate the impact of the quantum effects of protons on the electronic structure of high-pressure H$_3$S, a benchmark hydrogen-rich superconductor with a critical temperature ($T_c$) exceeding 200 K. Using Nuclear-Electronic Orbital Density Functional Theory (NEO-DFT), we treat hydrogen nuclei quantum mechanically on the same footing as electrons within a first-principles framework. Our calculations reveal that nuclear quantum effects (NQEs) induce subtle modifications to the electronic band structure and density of states (DOS) near the Fermi energy, including features associated with van Hove singularities. However, the resulting changes in the DOS would increase $T_c$ by only a few percent. On the other hand, calculations of the phonon dispersion with the NEO-DFT method show large changes in the hydrogen-dominated phonons that arise from a stiffening of the S-H bonds due to NQEs. These findings imply that the experimentally observed reduction in $T_c$ upon deuteration arises predominantly from changes in the phonon properties, while NQEs-induced modifications to the electronic structure itself are minimal.
physics.comp-ph 2026-04-07 Recognition

Adaptive Bayesian optimization automates LEED surface reconstruction

Physics-informed automated surface reconstruction via low-energy electron diffraction based on Bayesian optimization

The method places the full multiple-scattering model inside a trust-region loop that jointly fits atomic coordinates and experimental shifts

Low-energy electron diffraction (LEED) is a cornerstone technique for determining surface atomic structures, yet the quantitative analysis of electron diffraction intensity as a function of incident electron energy -- that is, LEED-\textit{I(V)} analysis -- remains a complex inverse problem. In this work, we tackle quantitative LEED-\textit{I(V)} analysis based on physics-informed Bayesian optimization (BO). By embedding multiple-scattering LEED forward models directly into a trust-region BO loop, our approach simultaneously optimizes both structural and experimental parameters, adaptively adjusting trust regions for efficient exploration of complex non-convex parameter spaces without manual intervention. The robustness and scalability of the approach are demonstrated using the Ag(100)-(1$\times$1) and Fe\textsubscript{2}O\textsubscript{3}(1$\overline{1}$02)-(1$\times$1) surfaces as examples. Our work establishes a general framework for solving inverse problems in various characterization techniques, unlocking a physics-informed, efficient, reproducible, and autonomous paradigm.
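A minimal sketch of a trust-region BO loop on a 1-D toy objective, assuming an RBF-kernel GP surrogate and a lower-confidence-bound acquisition: candidates are drawn inside a trust region around the incumbent, and the region grows on success and shrinks on failure. The real method wraps a multiple-scattering LEED forward model over many structural and experimental parameters; none of that appears here.

```python
import numpy as np

def gp_posterior(X, y, Xq, ls=0.3, noise=1e-6):
    """RBF-kernel GP posterior mean/std on 1-D inputs (zero prior mean)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def trust_region_bo(f, x0, steps=25, tr=0.5, seed=0):
    """Minimize f on [0, 1]: propose candidates inside a trust region around the
    incumbent, pick the lower-confidence-bound minimizer, then grow the trust
    region after an improvement and shrink it otherwise."""
    rng = np.random.default_rng(seed)
    X, y = np.array([x0]), np.array([f(x0)])
    for _ in range(steps):
        best = X[y.argmin()]
        cand = np.clip(best + tr * (rng.random(64) - 0.5), 0.0, 1.0)
        mu, sd = gp_posterior(X, y, cand)
        xn = cand[(mu - 1.5 * sd).argmin()]   # lower confidence bound
        yn = f(xn)
        tr = min(1.0, tr * 1.5) if yn < y.min() else max(0.02, tr * 0.7)
        X, y = np.append(X, xn), np.append(y, yn)
    return X[y.argmin()], float(y.min())

xb, yb = trust_region_bo(lambda x: (x - 0.73) ** 2, x0=0.1)
```

The adaptive trust region is what lets the loop exploit a locally smooth misfit landscape without getting trapped by its global non-convexity.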
physics.comp-ph 2026-04-06

Human review step lets LLMs code quantum algorithms reliably

From Paper to Program: Accelerating Quantum Many-Body Algorithm Development via a Multi-Stage LLM-Assisted Workflow

Multi-stage workflow with checked technical specs turns papers into working DMRG code that passes physics tests every time, in under a day.

Large language models (LLMs) can generate code rapidly but remain unreliable for scientific algorithms whose correctness depends on structural assumptions rarely explicit in the source literature. We introduce a multi-stage LLM-assisted workflow that separates theory extraction, formal specification, and code implementation. The key step is an intermediate technical specification -- produced by a dedicated LLM agent and reviewed by the human researcher -- that externalizes implementation-critical computational knowledge absent from the source literature, including explicit index conventions, contraction orderings, and matrix-free operational constraints that avoid explicit storage of large operator matrices. A controlled comparison shows that it is this externalized content, rather than the formal document structure, that enables reliable code generation. As a stringent benchmark, we apply this workflow to the Density-Matrix Renormalization Group (DMRG), a canonical quantum many-body algorithm requiring exact tensor-index logic, gauge consistency, and memory-aware contractions. The resulting code reproduces the critical entanglement scaling of the spin-$1/2$ Heisenberg chain and the symmetry-protected topological order of the spin-$1$ Affleck--Kennedy--Lieb--Tasaki model. Across 16 tested combinations of leading foundation models, all workflows satisfied the same physics-validation criteria, compared to a 46\% success rate for direct, unmediated implementation. The workflow reduced a development cycle typically requiring weeks of graduate-level effort to under 24 hours.
physics.comp-ph 2026-04-06

Humans must steer AI to write physics papers with full transcripts required

Co-Authoring with AI: How I Wrote a Physics Paper About AI, Using AI

Case study shows AI handles structure and syntax while humans enforce physical logic and academic standards.

The rapid integration of Large Language Models (LLMs) into scientific writing fundamentally challenges traditional definitions of authorship, responsibility, and scientific integrity. As researchers transition from using computers as deterministic tools to managing them as ``virtual collaborators,'' the nature of human contribution must be re-evaluated. Using the drafting process of a recent computational physics manuscript as a case study, this essay explores the indispensable role of the Human-in-the-Loop (HITL). We demonstrate that while AI excels at structural organization and syntax generation, the human author bears the ultimate responsibility for enforcing rigorous physical logic, maintaining academic diplomacy, and anticipating peer-review critiques. In this paradigm, the human contribution shifts from writing boilerplate text to acting as a Principal Investigator who actively mentors and steers the AI's reasoning. To ensure accountability and preserve the integrity of the scientific record in this new era, I argue that the community must mandate the publication of full, unedited AI interaction transcripts as standard supplementary material.
physics.comp-ph 2026-04-06 2 theorems

Finite wave runs yield Bloch bands from scattering

From Wave Scattering to Bloch Bands: A Time-Domain Approach to Band Formation in Periodic Media

Transmission spectra in layered stacks encode dispersion and gaps through phase delay and attenuation, bypassing reciprocal-space eigenvalue problems.

Band formation in periodic media is a central topic in undergraduate solid-state physics, typically introduced through Bloch's theorem as an eigenvalue problem in reciprocal space for infinitely periodic systems. While mathematically elegant, this formulation can appear abstract: it assumes an idealized infinite lattice, shifts attention away from real-space wave dynamics, and presents band structures as static results rather than emergent consequences of wave propagation. Consequently, students often struggle to relate band gaps to familiar physical phenomena such as reflection, transmission, and interference, leading to a disconnect between formal band theory and observable wave behavior. We present a computational framework that addresses this gap by reconstructing band formation directly from time-domain wave propagation in finite periodic systems. Using a staggered-grid finite-difference time-domain scheme for elastic waves, a broadband excitation is propagated through a layered medium to obtain its transmission spectrum. From this, students extract the Bloch dispersion relation and observe spatial attenuation in band-gap regions, revealing the roles of multiple scattering and phase coherence. This approach provides a physically transparent pathway to band theory and enables exploration of finite-size effects, disorder, and defect-localized modes within a unified computational framework. Implemented through compact code and guided exercises, the method offers an accessible and versatile pedagogical tool, while also equipping students with transferable skills in numerical modeling of wave phenomena across disciplines.
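The frequency-domain counterpart of this extraction is the classic transfer-matrix identity $\cos(qa) = \tfrac{1}{2}\,\mathrm{tr}\,M$ for the unit-cell matrix $M$: a band gap opens wherever the half-trace exceeds 1 in magnitude, so no real Bloch wavenumber exists and waves attenuate. The sketch below is not the paper's time-domain FDTD scheme, and the two-layer cell parameters are made up; it only illustrates the dispersion relation the students recover.

```python
import numpy as np

def layer_matrix(w, d, c, z):
    """Transfer matrix of one homogeneous layer (thickness d, speed c, impedance z)."""
    k = w / c
    return np.array([[np.cos(k * d),       z * np.sin(k * d)],
                     [-np.sin(k * d) / z,  np.cos(k * d)]])

def half_trace(w, layers):
    """cos(q*a) for the Bloch wavenumber q; |value| > 1 marks a band gap."""
    M = np.eye(2)
    for d, c, z in layers:
        M = layer_matrix(w, d, c, z) @ M
    return 0.5 * np.trace(M)

cell = [(1.0, 1.0, 1.0), (1.0, 2.0, 3.0)]   # hypothetical two-layer unit cell
freqs = np.linspace(0.05, 3.0, 400)
tr = np.array([half_trace(w, cell) for w in freqs])
in_gap = np.abs(tr) > 1.0                    # frequencies with no propagating Bloch mode
```

With matched impedances the half-trace reduces to a pure cosine and no gaps appear; the impedance mismatch in `cell` is what opens them, mirroring the multiple-scattering picture the time-domain simulations make explicit.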
physics.comp-ph 2026-04-06 Recognition

GRF integration smooths genetic optimization of graded lattices

Integrating Gaussian Random Functions with Genetic Algorithms for the Optimization of Functionally Graded Lattice Structures

Embedding Gaussian random functions inside the genetic algorithm keeps parameter changes continuous and cuts stress concentrations while the design objective is still satisfied.

The properties of lattice-based structures can be enhanced by varying their geometric parameters in a graded manner, and the gradation can be tailored to extremize a particular objective. In this manuscript, we propose a non-gradient-based optimization framework to find the tailor-made graded profiles for lattice-based structures. The key challenge addressed in the work is to ensure the graded nature/smoothness of the underlying structure in a non-gradient-based optimization scheme. As we demonstrate in the manuscript, the conventional implementation of the genetic algorithm provides structures with abrupt changes, leading to issues such as stress concentration. In this work, we propose a Gaussian random function (GRF)/Gaussian process regression (GPR) integrated genetic algorithm to obtain an optimal graded lattice profile for a given objective. The integration of the GRF/GPR along with a projection operator ensures the smoothness of the designs at each stage of the optimization. We present several numerical examples to demonstrate that the proposed framework provides smoother designs that are less susceptible to stress concentration, while ensuring satisfaction of the underlying objective.
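The core trick — letting the genome live in a latent space and mapping it through a Gaussian-random-function prior so every candidate is smooth by construction — can be sketched as below. The RBF covariance, `tanh` projection, toy tracking objective, and all numbers are my illustrative stand-ins for the paper's finite-element evaluation and projection operator.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

# RBF covariance: every profile L @ genome is smooth in x by construction
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.15) ** 2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))

def profile(genome):
    """Latent genome -> smooth graded profile, bounded in (0.5, 1.5)."""
    return 1.0 + 0.5 * np.tanh(L @ genome)

def objective(p):
    """Toy stand-in for the FE objective: track a target linear gradation."""
    return float(np.mean((p - (0.7 + 0.6 * x)) ** 2))

pop = [rng.standard_normal(len(x)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=lambda g: objective(profile(g)))
    parents = pop[:10]                       # elitist selection
    children = [parents[i] + 0.3 * rng.standard_normal(len(x))
                for i in rng.integers(0, len(parents), 20)]
    pop = parents + children
best = profile(min(pop, key=lambda g: objective(profile(g))))
```

Mutations act on the latent genome, so even large genetic jumps produce graded, kink-free profiles — the property the abstract says a naive GA on raw parameters lacks.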
physics.comp-ph 2026-04-06 2 theorems

Deep energy method predicts crack growth and resistance signature together

A multiphysics deep energy method for fourth-order phase-field fracture with piezoresistive self-sensing

Mechanics and fracture are solved first; electrical sensing follows from strain- and damage-dependent conductivity without artificial mixing of mechanical and electrical quantities.

Self-sensing conductive composites can reveal deformation and damage through measurable changes in electrical resistance, which makes them attractive for embedded diagnostics and learning-enabled structural health monitoring. This paper presents a physically consistent multiphysics Deep Energy Method (DEM) for brittle fracture in piezoresistive materials. The mechanical part is modeled by small-strain linear elasticity coupled to a fourth-order AT2-type phase-field fracture functional with tensile/compressive energy split and history-field irreversibility. To avoid artificial energetic mixing of mechanical and electrical quantities, the electrical problem is treated as a one-way coupled sensing subproblem: after solving the mechanics--fracture problem, the electric potential is obtained from a steady conduction problem whose conductivity depends on strain through a linearized piezoresistive law and on damage through a crack-induced conductivity degradation. The resulting formulation predicts crack evolution together with its resistance signature without assigning the electrical field an artificial crack-driving role. DEM is used to minimize the variational subproblems over admissible neural trial spaces with exact imposition of essential boundary conditions. A lean verification suite is used to validate the electrical building blocks and the fracture engine separately, followed by a numerical study of a tensile plate with stress concentrators and electrodes. In that study, the framework captures a nontrivial sensing regime in which appreciable damage growth leaves the global resistance nearly unchanged, followed by a sharp resistance increase once dominant conductive ligaments are disrupted and current paths reorganize strongly.
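The sensing regime described at the end — appreciable damage growth with nearly flat resistance, then a sharp rise once dominant conductive ligaments fail — can be illustrated with a toy parallel-ligament model (my construction, not the paper's boundary-value problem): current reroutes through surviving ligaments until few remain.

```python
import numpy as np

def plate_resistance(n_cut, n_lig=20, g0=1.0):
    """Resistance across a crack plane bridged by n_lig equal parallel ligaments;
    the advancing crack severs them one at a time."""
    remaining = n_lig - n_cut
    return np.inf if remaining == 0 else 1.0 / (remaining * g0)

R = np.array([plate_resistance(k) for k in range(20)])
early = R[10] / R[0]   # half the ligaments gone: resistance merely doubles
late = R[19] / R[0]    # one ligament left: resistance up 20x, about to diverge
```

The hyperbolic dependence on the surviving conductance is why global resistance is an insensitive damage indicator early on and an abrupt one near severance.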
physics.comp-ph 2026-04-06 2 theorems

PINN inverts blood flow from cuff data in minutes

Fast and Accurate Inverse Blood Flow Modeling from Minimal Cuff-Pressure Data via PINNs

Single network solves eight-artery tree with clinical correlations of 0.85 for cardiac output and 0.95 for central pressure

Accurate assessment of central hemodynamics is essential for diagnosis and risk stratification, yet it still relies largely on invasive measurements or on indirect reconstructions built from population-averaged transfer functions. While conventional methods are valuable in clinical practice, they face limitations, particularly in personalized medicine. Physics-informed methods address these by integrating physical principles, reducing the need for extensive data. In this work, a fully noninvasive, patient-specific framework is developed that combines a validated 1-D model of the systemic arterial tree with physics-informed neural networks (PINNs). This model performs the inverse solution of the flow and pressure fields within the arterial network, given minimal noninvasive measurements of pressure from a cuff reading and trains in 4000 iterations, at least 10x faster than the current state-of-the-art models due to several model enhancements. We validate the model predictions against our 1-D solver, yielding a near perfect correlation, and perform additional tests on a clinical dataset for the identification of important central hemodynamic parameters of cardiac output $CO$ and central systolic blood pressure $cSBP$, with correlations of $r=0.847$ and $r=0.951$, respectively. Moreover, the model is able to tune the patient-specific coefficients of the terminal resistance $R_T$ and compliance $C_T$ while training, treating them as learnable parameters. The inverse PINN model is able to solve the entire tree of 8 arteries with a single network, costing 5-10 minutes of computational time. This significant performance boost compared to traditional iterative inverse methods holds promise towards applications of personalized cardiac output monitoring and hemodynamic assessment via noninvasive approaches like wearable devices.
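The idea of identifying patient-specific terminal resistance and compliance from a pressure trace can be illustrated on a two-element Windkessel toy model. This is not the paper's PINN: the identification here is a simple nested grid search rather than treating $R_T$, $C_T$ as learnable parameters during network training, and the inflow waveform and all values are made up.

```python
import numpy as np

def simulate(R, C, t, Q):
    """Two-element Windkessel, C dP/dt = Q(t) - P/R, integrated by forward Euler."""
    P = np.empty_like(t)
    P[0] = 80.0
    dt = t[1] - t[0]
    for i in range(len(t) - 1):
        P[i + 1] = P[i] + dt * (Q[i] - P[i] / R) / C
    return P

t = np.linspace(0.0, 2.0, 200)
Q = 90.0 * np.maximum(np.sin(2.0 * np.pi * t), 0.0)  # pulsatile inflow, two beats
P_obs = simulate(1.1, 1.3, t, Q)                     # synthetic "measured" pressure

# recover the hidden (R, C) from the pressure trace alone
Rs = np.linspace(0.5, 2.0, 31)
Cs = np.linspace(0.5, 2.0, 31)
best_err, R_fit, C_fit = min(
    (float(np.mean((simulate(R, C, t, Q) - P_obs) ** 2)), R, C)
    for R in Rs for C in Cs
)
```

Both parameters leave distinct fingerprints on the waveform (mean level vs. diastolic decay rate), which is what makes them jointly identifiable — the same property the PINN exploits when it learns them alongside the network weights.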
