pith. machine review for the scientific record.

math.NA

Numerical Analysis

Numerical algorithms for problems in analysis and algebra, scientific computation

math.NA 2026-05-13 2 theorems

LS weak Galerkin method achieves optimal H2 error estimates

A Least-Squares Weak Galerkin Method for Second-Order Elliptic Equations in Non-Divergence Form

The discrete weak Hessian produces symmetric positive definite systems on general meshes with proven optimal convergence.

Abstract
This article proposes a novel least-squares weak Galerkin (LS-WG) method for second-order elliptic equations in non-divergence form. The approach leverages a locally defined discrete weak Hessian operator constructed within the weak Galerkin framework. A key feature of the resulting algorithm is that it yields a symmetric and positive definite linear system while remaining applicable to general polygonal and polyhedral meshes. We establish optimal-order error estimates for the approximation in a discrete $H^2$-equivalent norm. Finally, comprehensive numerical experiments are presented to validate the theoretical analysis and demonstrate the efficiency and robustness of the method.
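
Why least squares yields SPD systems, in miniature: minimizing $\|Hu - f\|^2$ over the discrete unknowns leads to the normal equations $H^\top H u = H^\top f$, whose matrix is symmetric positive definite whenever $H$ has full column rank. A minimal numpy sketch, with a random tall matrix standing in for the discrete weak Hessian the paper constructs on polygonal meshes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random tall matrix as a stand-in for the discrete weak Hessian operator
# (the LS-WG method builds this within the weak Galerkin framework).
H = rng.standard_normal((120, 40))
f = rng.standard_normal(120)

# Least-squares formulation: minimize ||H u - f||^2.
# The normal equations give a symmetric positive definite system.
A = H.T @ H          # SPD whenever H has full column rank
b = H.T @ f

# SPD structure permits a Cholesky solve.
L = np.linalg.cholesky(A)
u = np.linalg.solve(L.T, np.linalg.solve(L, b))

# Sanity checks: symmetry and positive definiteness.
assert np.allclose(A, A.T)
assert np.all(np.linalg.eigvalsh(A) > 0)
print("residual norm:", np.linalg.norm(H @ u - f))
```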
math.NA 2026-05-13 Recognition

Complex coupling reconstructs unknown Robin boundaries

Cavity shape reconstruction with a homogeneous Robin condition via a constrained coupled complex boundary method with ADMM

Minimizing the imaginary part of the coupled solution with ADMM constraints recovers cavity shapes from partial Cauchy measurements.

Abstract
We revisit the problem of identifying an unknown portion of a boundary subject to a Robin condition based on a pair of Cauchy data on the accessible part of the boundary. It is known that a single measurement may correspond to infinitely many admissible domains. Nonetheless, numerical strategies based on shape optimization have been shown to yield reasonable reconstructions of the unknown boundary. In this study, we propose a new application of the coupled complex boundary method to address this class of inverse boundary identification problems. The overdetermined problem is reformulated as a complex boundary value problem with a complex Robin condition that couples the Cauchy data on the accessible boundary. The reconstruction is achieved by minimizing a cost functional constructed from the imaginary part of the complex-valued solution. To improve stability with respect to noisy data and initialization, we augment the formulation with inequality constraints through prior admissible bounds on the state, leading to a constrained shape optimization problem. The shape derivative of the complex state and the corresponding shape gradient of the cost functional are derived, and the resulting problem is solved using an alternating direction method of multipliers (ADMM) framework. The proposed approach is implemented using the finite element method and validated through various numerical experiments.
math.NA 2026-05-13 2 theorems

Stochastic line-search method speeds 3D CT reconstruction

A Line-Search-Based Stochastic Gradient Method for 3D Computed Tomography

Mini-batches of full 2D projections cut early iteration time while preserving acquisition geometry and needing no training data.

Abstract
We introduce FB-LISA, a forward-backward (FB) generalization of a recently proposed line-search-based stochastic gradient algorithm, to address the imaging problem of volumetric reconstruction in Computed Tomography: a highly demanding problem that involves orders of magnitude more data, a high computational burden for forward and backprojection, and memory requirements that push current GPU architectures to their limits. Our formulation employs stochastic mini-batches composed of full 2D projections, preserving the physical structure of the acquisition process while enabling significant speed-ups during early iterations. The resulting method demonstrates how concepts traditionally associated with deep learning can be repurposed to accelerate large-scale inverse problems, without relying on training data or learned priors.
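
A rough sketch of the mini-batching idea described above: stochastic gradient steps over random batches of whole projections, with an Armijo-type backtracking line search on the batch objective. The random matrices below are stand-ins for the CT forward projectors; this is not the FB-LISA algorithm itself, whose forward-backward structure and line-search rule are specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_proj, n_vox = 24, 200          # number of 2D projections, voxel count (toy sizes)

# Random matrices stand in for the per-projection forward operators A_j
# (in CT these map the volume to one full 2D projection each).
A = [rng.standard_normal((50, n_vox)) / np.sqrt(50) for _ in range(n_proj)]
x_true = rng.standard_normal(n_vox)
y = [Aj @ x_true for Aj in A]

def batch_loss_grad(x, idx):
    """Least-squares data fit restricted to a mini-batch of projections."""
    r = [A[j] @ x - y[j] for j in idx]
    loss = 0.5 * sum(np.dot(rj, rj) for rj in r)
    grad = sum(A[j].T @ rj for j, rj in zip(idx, r))
    return loss, grad

x = np.zeros(n_vox)
step0, batch_size = 1.0, 4
for it in range(200):
    idx = rng.choice(n_proj, size=batch_size, replace=False)
    loss, g = batch_loss_grad(x, idx)
    # Armijo-type backtracking on the mini-batch objective.
    t = step0
    while batch_loss_grad(x - t * g, idx)[0] > loss - 1e-4 * t * np.dot(g, g):
        t *= 0.5
    x = x - t * g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```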
math.NA 2026-05-13 2 theorems

Mass lumping produces intrinsic fractional box discretization

Finite element and box-method discretizations for fractional elliptic problems with quadrature and mass lumping

A unified admissible inner product framework recovers the fractional box method from finite elements and supplies explicit error bounds that quantify the effect of load quadrature.

Abstract
We analyze numerical approximation of the fractional elliptic problem $L^{\beta}u=f$, ${\beta>0}$, where $L$ is a second-order self-adjoint elliptic operator with homogeneous Dirichlet or Neumann boundary conditions. The paper develops a unified conforming piecewise linear framework that covers both the standard finite element discretization and the box-method discretization of fractional powers. The key point is that the discrete fractional operator is defined with respect to an admissible inner product on the trial space. This includes, in particular, the standard $L^{2}$ inner product and the quadrature-based mass-lumped inner product, and we also identify a broader family of admissible inner products interpolating between these two realizations. Within this framework, we show that the mass-lumped choice yields the intrinsic fractional box discretization, namely the one obtained by taking fractional powers of the nonfractional box solution operator. For both the finite element and box-method realizations, we establish error estimates under natural consistency assumptions, making explicit the effect of load quadrature in the box case. The analysis applies directly to practical schemes and is supported by numerical experiments in one and two space dimensions.
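
A one-dimensional illustration of the intrinsic construction under the mass-lumped inner product: the discrete fractional operator is defined through the generalized eigenproblem of the stiffness matrix with respect to the lumped mass matrix, and $L_h^{-\beta}f$ is evaluated spectrally. The uniform-grid P1 Dirichlet Laplacian below is an illustrative assumption; the paper's framework covers general admissible inner products and the box method.

```python
import numpy as np
from scipy.linalg import eigh

# 1D model: P1 finite elements for L = -d^2/dx^2 on (0,1), homogeneous Dirichlet.
n = 99                        # interior nodes
h = 1.0 / (n + 1)
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h            # stiffness matrix
M_lump = np.diag(np.full(n, h))                    # mass-lumped inner product

# Discrete fractional operator via the generalized eigenproblem
# K phi = lam * M phi in the chosen admissible inner product.
beta = 0.5
lam, Phi = eigh(K, M_lump)                         # M-orthonormal eigenvectors

x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)

# Solve L^beta u = f:  u = sum_i lam_i^(-beta) (f, phi_i)_M phi_i.
coeffs = Phi.T @ (M_lump @ f)
u = Phi @ (coeffs / lam**beta)

# For f = sin(pi x), the exact solution is pi^(-2*beta) sin(pi x) as h -> 0.
print("max error vs continuum:", np.abs(u - np.pi**(-2 * beta) * f).max())
```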
math.NA 2026-05-13 Recognition

Semismooth* Newton method speeds up TV regularization for large tomography problems

Efficient TV regularization of large-scale linear inverse problems via the SCD semismooth* Newton method with applications in tomography

The tailored SCD solver achieves locally superlinear convergence on nonsmooth TV penalties without smoothing approximations.

Abstract
In this paper, we consider the efficient numerical minimization of Tikhonov functionals resulting from total-variation (TV) regularization of linear inverse problems. Since the TV penalty is non-smooth, this is typically done either via smooth approximations, which are inexact, or using non-smooth optimization techniques, which can often be numerically expensive, in particular for large-scale problems. Here, we present a numerically efficient minimization approach based on the recently proposed semismooth* Newton method, which employs a novel concept of graphical derivatives and exhibits locally superlinear convergence. The proposed approach is specifically tailored to TV regularization, suitable for large-scale inverse problems, and supported by strong mathematical convergence guarantees. Furthermore, we demonstrate its performance on two (large-scale) tomographic imaging problems and compare our results to those obtained via other state-of-the-art TV regularization approaches.
math.NA 2026-05-13 2 theorems

New algorithm avoids invalid material mixtures during optimization

The SiMPL Method for Multi-Material Topology Optimization

A geometry-matched penalty on each update keeps one material per point by construction, then a small dual step meets global limits.

Abstract
We introduce an efficient and scalable method for density-based multi-material topology optimization, integrating classical mirror descent techniques with point-wise polytopal design constraints. Such constraints arise naturally in this class of problems, wherein the vertices of convex polytopes correspond to distinct design states, only one of which should be occupied at each point in space. The framework generates a descending sequence of iterates by penalizing the design space around the previous iterate with a generalized distance function tailored to the convex geometry of the $n$-dimensional polytope. This distance function, called a Bregman divergence, smooths the optimization landscape, ensuring that each iterate strictly satisfies the point-wise constraints. Subsequently, global constraints (e.g., bounds on the structural mass) can be enforced easily by solving a small, finite-dimensional dual problem. The resulting method is simple to implement and demonstrates robustness and efficiency when combined with an Armijo-type line search algorithm. We validate the method in structural design problems involving the optimal arrangement of both isotropic and anisotropic materials, as well as magnetic flux optimization in electric motors.
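
The mirror-descent mechanism can be sketched with the Kullback-Leibler Bregman divergence on the probability simplex, which gives a multiplicative (exponentiated-gradient) update that keeps every iterate strictly feasible by construction. The linear toy objective below is an assumption for illustration; the paper's setting adds design sensitivities, an Armijo-type line search, and global constraints enforced through a small dual problem.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pts, n_mat = 500, 3           # spatial points, candidate materials

# Toy per-point cost of assigning each material (stand-in for design sensitivities).
c = rng.standard_normal((n_pts, n_mat))

# Density variables live on the simplex at each point: rho >= 0, sum_m rho_m = 1.
rho = np.full((n_pts, n_mat), 1.0 / n_mat)

step = 0.5
for it in range(100):
    g = c                        # gradient of the linear toy objective <c, rho>
    # Mirror-descent update with KL Bregman divergence: a multiplicative step
    # that keeps each row strictly inside the simplex.
    rho = rho * np.exp(-step * g)
    rho /= rho.sum(axis=1, keepdims=True)

# Each point concentrates on its cheapest material, never leaving the simplex.
assert np.allclose(rho.sum(axis=1), 1.0)
print("fraction agreeing with argmin:", np.mean(rho.argmax(1) == c.argmin(1)))
```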
math.NA 2026-05-13 Recognition

Optimized two-step propagator cuts parareal convergence factor to 0.0064

Optimized Two-Step Coarse Propagators in Parareal Algorithms

Error-bound design yields faster iterations for parabolic PDE solvers at moderate cost.

Abstract
In this work, we propose a novel framework for accelerating the parareal algorithm, in which the coarse propagator is formulated as a two-step method and optimized with respect to the convergence factor. We derive a rigorous error estimate for the proposed two-step parareal algorithm, yielding an explicit bound on the linear convergence factor. This estimate is not only of theoretical interest: it provides a quantitative guideline for selecting and designing coarse propagators. Guided by this estimate, we consider the linear parabolic equation as an illustrative example and construct an optimized two-step coarse propagator (O2CP) that delivers very fast convergence in practice. The resulting method attains an optimized convergence factor of approximately $0.0064$, substantially smaller than that of commonly used practical coarse propagators in the classical parareal setting, while keeping the computational cost moderate. Numerical experiments on linear and nonlinear parabolic equations fully support the theoretical analysis and demonstrate rapid convergence of the two-step parareal algorithm equipped with the O2CP.
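
For orientation, a minimal parareal sketch on the scalar test problem $u' = -au$: a cheap coarse sweep is corrected iteratively with fine solves that are parallel across windows. The backward Euler coarse propagator below is a plain stand-in, not the optimized two-step O2CP constructed in the paper.

```python
import numpy as np

# Minimal parareal sketch for u' = -a*u.
a, T, N = 5.0, 1.0, 20
dT = T / N

def fine(u, dt=dT, substeps=100):
    """Accurate fine propagator over one window (Crank-Nicolson substeps)."""
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1 - a * h / 2) / (1 + a * h / 2)
    return u

def coarse(u, dt=dT):
    """Cheap coarse propagator (one backward Euler step)."""
    return u / (1 + a * dt)

# Initial coarse sweep.
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n])

# Parareal correction: U_{n+1} <- C(U_n^new) + F(U_n^old) - C(U_n^old).
for k in range(5):
    F_old = np.array([fine(U[n]) for n in range(N)])     # parallel in practice
    C_old = np.array([coarse(U[n]) for n in range(N)])
    for n in range(N):
        U[n + 1] = coarse(U[n]) + F_old[n] - C_old[n]
    print(f"iteration {k}: error at T = {abs(U[-1] - np.exp(-a * T)):.2e}")
```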
math.NA 2026-05-13 2 theorems

Bisection collapses all triangles to right triangles in one step

Dynamics of the Longest-Edge Altitude Bisection Algorithm

Longest-edge altitude refinement maps any shape onto a fixed right-triangle geodesic and bounds the shrinkage of mesh size.

Abstract
We study a longest-edge based refinement scheme for triangulations, termed the longest-edge altitude bisection (LEAB), in which each triangle is subdivided by dropping the altitude from the vertex opposite to its longest edge. Using the normalized shape space of triangles introduced by Perdomo and Plaza (Properties of triangulations obtained by the longest-edge bisection, \emph{Cent. Eur. J. Math.}, 12(12) (2014), 1796-1810), we show that LEAB admits a simple geometric description: the normalized left and right children of a triangle are obtained by intersecting the geodesic of right triangles with rays issued from the endpoints of the longest edge, and explicit formulas for the mappings are derived. This characterization implies the striking observation that the associated refinement dynamics collapse the entire shape space onto the right-triangle geodesic in a single step and that every point on this geodesic is fixed. Two-sided bounds for the contraction of the mesh size (discretization parameter) are derived. Applications and limitations of the method are also briefly discussed.
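
The LEAB rule itself fits in a few lines: find the longest edge, drop the altitude from the opposite vertex, and split at its foot. The sketch below (plain numpy, with an arbitrary example triangle) also checks the collapse property stated above: both children of any triangle are right triangles.

```python
import numpy as np

def leab_children(tri):
    """Split a triangle by the altitude from the vertex opposite its longest edge."""
    tri = np.asarray(tri, dtype=float)
    # Edge opposite vertex i runs between the other two vertices.
    lengths = [np.linalg.norm(tri[(i + 1) % 3] - tri[(i + 2) % 3]) for i in range(3)]
    i = int(np.argmax(lengths))
    apex, p, q = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
    # Foot of the altitude: orthogonal projection of the apex onto line pq.
    e = q - p
    foot = p + np.dot(apex - p, e) / np.dot(e, e) * e
    return np.array([apex, p, foot]), np.array([apex, foot, q])

def is_right(tri, tol=1e-12):
    """Check whether some interior angle is a right angle."""
    for i in range(3):
        u, v = tri[(i + 1) % 3] - tri[i], tri[(i + 2) % 3] - tri[i]
        if abs(np.dot(u, v)) < tol:
            return True
    return False

# Any starting shape collapses to right triangles after a single LEAB step.
parent = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 2.0]])
left, right = leab_children(parent)
print(is_right(left), is_right(right))   # True True
```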
math.NA 2026-05-13 2 theorems

Dual-SAV scheme reduces constrained curve flows to linear steps plus tiny nonlinear solve

A stabilized dual-SAV parametric finite element framework for constrained planar geometric flows with mesh regularization

Projected mesh energy and block reduction keep nonlinear part at size K+1 while enforcing dissipation and exact constraints

Abstract
Parametric finite element discretizations of constrained geometric flows must simultaneously address high-order geometric stiffness, mesh degeneration, and nonlinear global constraints. This paper develops a stabilized dual-SAV (scalar auxiliary variable) parametric finite element framework for planar closed curves. The proposed formulation introduces separate auxiliary variables for the physical geometric energy and for an artificial mesh regularization energy. The mesh regularization is coupled only to tangential motion by projecting out its normal variation, so that mesh redistribution changes the parametrization without introducing an artificial normal driving force. Based on this dual-energy structure, we construct a semi-implicit frozen-metric scheme with zero-order stabilization. The scheme leads to linear spatial response problems and satisfies discrete dissipation estimates for the modified geometric and mesh SAV energies. Nonlinear global constraints are handled by an algebraic block reduction: after solving a small number of symmetric positive-definite response problems, the remaining nonlinear system involves only the geometric auxiliary variable and the Lagrange multipliers. For $K$ global constraints, this reduced nonlinear system has dimension $K+1$; in particular, simultaneous area and length constraints lead to a three-dimensional nonlinear system, independently of the number of mesh vertices. Numerical experiments for curve shortening, area-preserving curve shortening, curve diffusion, and Helfrich-type flows illustrate the modified-energy dissipation, the enforcement of geometric constraints, and the improvement of mesh quality for both second- and fourth-order examples.
math.NA 2026-05-13 Recognition

Crank-Nicolson FEM conserves energy exactly for KGZ equations

A novel energy-conservation Crank-Nicolson finite element method for generalized Klein-Gordon-Zakharov equations

Bilinear elements paired with CN time stepping yield exact discrete energy conservation and H1 superconvergence under weakened regularity.

Abstract
This article focuses on an energy-conservation Galerkin finite element method (FEM) for the generalized Klein-Gordon-Zakharov (KGZ) equations. This method combines the bilinear finite element method for spatial discretization with the Crank-Nicolson (CN) scheme for temporal discretization, thereby guaranteeing exact conservation of the discrete energy functional. A rigorous theoretical analysis is devoted to deriving error bounds for the fast-time-scale electric field $u$ and the ion density deviation $\varphi$. By systematically integrating interpolation estimates, Ritz projection, and a postprocessing technique, superclose error estimates and global superconvergence are established for $u$ in the $H^1$-norm, even under weakened regularity assumptions on the exact solution. Concurrently, we prove $H^1$-norm superconvergence for the auxiliary variable $\phi$ ($-\Delta\phi = \varphi_t$) and optimal-order $L^2$-norm error estimates for the auxiliary variable $p$ ($p=u_t$) and for $\varphi$. Numerical examples are provided to confirm the theoretical results.
math.NA 2026-05-13 Recognition

Splitting scheme converges for wave maps at low regularity

A splitting scheme for the wave maps equation at low regularity

Filtered Lie splitting preserves null structure and converges in discrete Bourgain spaces for all subcritical data in three dimensions.

Abstract
We prove convergence of a filtered Lie splitting scheme for the wave maps equation with low regularity initial data in dimension 3. The convergence analysis is performed in discrete Bourgain spaces, an approach that has proved fruitful for the low regularity analysis of the equation in the continuous setting. An important difficulty here is that the analysis of wave maps at low regularity requires the use of the null structure of the system; this structure thus has to be preserved at the discrete level to obtain an effective, stable low regularity scheme. Since the null structure involves time derivatives, the scheme has to be designed carefully. The presence of time derivatives in the nonlinearity then constitutes the most significant source of numerical error. Nonetheless, we are able to prove convergence of the scheme for all subcritical initial data in $H^s$, $s>d/2$.
math.NA 2026-05-13 Recognition

IPRM removes Gibbs ringing from fractional Fourier series

Resolving the Gibbs Phenomenon in Fractional Fourier Series via Inverse Polynomial Reconstruction

The reconstruction matrix stays well-conditioned for any rotation angle and delivers exponential accuracy for analytic functions.

Abstract
The fractional Fourier series generalizes the classical Fourier series by introducing a rotation angle $\alpha$ in the time-frequency plane, but inherits the Gibbs phenomenon for piecewise smooth functions. Unlike the classical setting, the chirp modulation factor renders the fractional partial sum complex-valued, corrupting both real and imaginary components simultaneously and making direct adaptation of classical remedies insufficient. The Inverse Polynomial Reconstruction Method (IPRM) resolves the Gibbs phenomenon by enforcing that the Fourier coefficients of a Gegenbauer polynomial expansion match the given spectral data, rather than projecting the corrupted partial sum onto a polynomial basis. This paper extends the IPRM to fractional Fourier series for the first time. The fractional transformation matrix is derived and its conditioning is shown to be governed by an $\alpha$-independent Gram matrix, which reveals the dependence on the Gegenbauer parameter $\lambda$ and the polynomial degree $m$, while being entirely insensitive to the transform angle. An $L^{\infty}$ error estimate is established, guaranteeing exponential convergence for analytic functions. Numerical experiments on piecewise analytic test functions demonstrate complete elimination of the Gibbs phenomenon and confirm the theoretical predictions.
math.NA 2026-05-13 Recognition

Contraction map on Legendre-reduced system recovers nonlinear Schrödinger initial data

Inverse initial data for nonlinear Schr\"odinger equation via Carleman estimates and the contraction principle

Truncating time with weighted Legendre polynomials turns the inverse problem into an elliptic system whose unique solution is stable under noisy data.

Abstract
We study an inverse initial-data problem for a nonlinear Schr\"odinger equation in which the initial wave field is reconstructed from lateral measurements. Our approach combines a Legendre-polynomial-exponential-time dimensional reduction with a Carleman-based contraction principle. First, we expand the solution in a weighted Legendre basis in time and truncate the expansion to obtain a coupled nonlinear elliptic system for the spatial coefficients. Next, we solve this reduced system by constructing a contraction map on a suitable admissible set. This contraction map admits a unique fixed point, which is the limit of the corresponding Picard iteration. We also establish a stability estimate showing that this fixed point remains close to the exact reduced solution in the noisy-data case. Finally, we present numerical experiments in two space dimensions for several different geometries and nonlinear exponents. The numerical results show that the proposed method accurately reconstructs the main features of the initial wave field and remains stable even when the boundary data contain noise.
math.NA 2026-05-12 2 theorems

Algorithm guarantees energy decrease for ferronematic states

An energy-decreasing algorithm for the finite element approximation of ferronematic equilibrium states

Decomposition-coordination with Uzawa iterations yields a strictly descending sequence to the discrete minimizer on triangular meshes.

Abstract
We develop an energy-decreasing algorithm for the finite element approximation of two-dimensional ferronematic equilibrium states. The problem is formulated as the minimization of the harmonic energy of two two-dimensional vector fields, both with prescribed length, together with an additional nonlinear relation on the orientation of the two vectors. The finite element setting is based on piecewise continuous finite elements on a weakly acute triangulation. The computational realization of the energy-decreasing algorithm employs a decomposition-coordination framework and a Uzawa-like iteration. Numerical experiments are presented to illustrate the computational performance of the algorithm.
math.NA 2026-05-12 2 theorems

MHD projection sliced to 1D Brent minimization

Efficient Admissible Set Projection in Optimization-based Invariant-Domain-Preserving Limiters for Ideal MHD

Magnetic energy parameterization makes optimization-based admissibility enforcement practical for high-order schemes

Abstract
Preserving the admissible set of the ideal magnetohydrodynamics (MHD) equations is important not only for producing physically meaningful numerical solutions, but more importantly for achieving robust computations. In this paper, we develop an optimization-based limiter to enforce admissibility while preserving global conservation and accuracy. For an easy and efficient projection, we decompose the admissible set into slices parameterized by the magnetic energy, so that the MHD projection reduces to a one-dimensional minimization, which can be solved efficiently by the Brent method. The splitting method can then be used to efficiently solve the global minimization problem of the optimization-based limiter, which enforces cell average admissibility in discontinuous Galerkin (DG) schemes; pointwise admissibility can be further enforced by the Zhang-Shu positivity-preserving limiter. We apply the limiter to high-order DG schemes and present numerical results for a few representative MHD problems.
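
The slicing pattern in miniature: index the admissible set by one scalar, project cheaply within each slice, and let Brent's method handle the outer one-dimensional minimization. The disk-shaped slices and the clipping projection below are hypothetical stand-ins for the actual MHD admissibility constraints.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A state q is projected onto an admissible set that is a union of slices
# indexed by one scalar s (standing in for the magnetic energy). For fixed s
# the slice projection is cheap and explicit, so the full projection reduces
# to a 1D minimization over s, handled here by scipy's Brent minimizer.
q = np.array([1.5, -0.7, -0.5])          # toy state to project

def project_onto_slice(s):
    """Project q onto the toy slice {(x, y, s) : x^2 + y^2 <= 1 + s}."""
    xy = q[:2]
    r = np.sqrt(max(1.0 + s, 0.0))
    norm = np.linalg.norm(xy)
    if norm > r:
        xy = xy * (r / norm)
    return np.array([xy[0], xy[1], s])

def slice_distance(s):
    return float(np.sum((project_onto_slice(s) - q) ** 2))

res = minimize_scalar(slice_distance, bracket=(0.0, 1.0), method="brent")
print("optimal slice parameter:", res.x)
print("projection:", project_onto_slice(res.x))
```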
math.NA 2026-05-12 Recognition

Separable estimators tighten relaxations beyond McCormick

Relaxation via Separable Estimators: Arithmetic and Implementation

Superposition relaxations for factorable functions outperform McCormick in tightness tests including neural networks, though at higher cost.

Abstract
This article presents an arithmetic, called superposition relaxation, for bracketing the graph of a multivariate factorable function on a compact domain between a pair of underestimating and overestimating functions that are both separable. Propagation rules are established for affine and nonlinear composition operations, with a focus on exploiting global monotonicity and convexity properties in the composition. The local convergence properties of this arithmetic are also analyzed in both the pointwise and Hausdorff sense, including conditions under which quadratic pointwise convergence propagates through composition. Parameterizations of the univariate summands in a superposition relaxation either as piecewise-constant or continuous piecewise-linear functions are discussed for a practical implementation. It is shown through numerical case studies that superposition relaxations can be consistently tighter than McCormick relaxations, including for the relaxation of artificial neural networks. But superposition relaxations also incur a higher computational cost than McCormick relaxations. Further investigations are thus warranted as applications in global optimization seek to balance a relaxation's tightness with its computational cost.
math.NA 2026-05-12 Recognition

Coupled CO2-temperature model predicts building dynamics from sparse data

Data-driven moving-window Bayesian inference for transient CO2-temperature network models of buildings

Moving-window Bayesian inference estimates airflow and occupancy trajectories to deliver forecasts with credible intervals and model-data mismatch diagnostics.

Abstract
In this work, we propose a CO2-temperature network model that links multi-zone mass transport and thermal dynamics through shared latent drivers, airflow and occupancy. The thermal component is formulated as a resistance-capacitance (RC) network augmented with airflow-driven convective exchange, while the CO2 component is governed by inter-zonal convective transport. To calibrate the model and track time-varying operating conditions based on sparse sensing, we introduce a moving-window Bayesian inference procedure that jointly estimates thermal parameters and airflow and occupancy trajectories. The estimation also provides room-specific sensor noise levels, yielding posterior predictive forecasts with credible intervals. The framework is assessed using a controlled synthetic benchmark and a scaled physical validation experiment with CO2 and temperature sensing. In both settings, the posterior accurately reconstructs trajectories within windows and delivers low forecast errors. When inference windows overlap abrupt regime transitions, the widened uncertainty bands and increased inferred noise levels provide an interpretable diagnostic of model-data mismatch, followed by rapid recovery once the new regime is observed. Overall, coupling CO2-informed airflow with thermal dynamics yields a robust approach for conductive and advective temperature prediction, supporting practical monitoring and energy-performance assessment under limited sensing.
math.NA 2026-05-12 2 theorems

Three method classes for MD transport coefficients each carry error bounds

Mathematical analysis and numerical methods for the computation of transport coefficients in molecular dynamics

Nonequilibrium driving, equilibrium correlations, and transient relaxation are analyzed so estimators come with quantifiable numerical error

Abstract
We review various numerical approaches to compute transport coefficients in molecular dynamics. These approaches can be broadly classified into three groups: (i) nonequilibrium methods based on applying an external driving field to the system, measuring the average response, and evaluating the related linear response coefficient; (ii) approaches reformulating the transport coefficient of interest through a time correlation function for the equilibrium dynamics (the most popular instances being Green--Kubo and Einstein formulas); (iii) transient techniques, where the transport coefficient can be computed by monitoring the return to the steady state of a dynamics perturbed off its stationary distribution. For all three classes of methods, we provide elements of numerical analysis, allowing one to estimate, or at least quantify, the level of numerical error in the estimator of the transport coefficient; we also briefly present recent attempts to compute transport coefficients more efficiently with variance reduction approaches such as control variates, importance sampling, and coupling methods. The computation of transport coefficients nonetheless remains challenging and will continue to require research efforts in the foreseeable future.
math.NA 2026-05-12 2 theorems

Perturbation correction convexifies Stefan optimization and lifts accuracy by 2-6 orders of magnitude

PCELM: Perturbation-Correction Extreme Learning Machine for the Stefan problem

PCELM first minimizes a nonconvex residual, then solves a convex correction subproblem, delivering large accuracy gains for multi-phase moving-boundary problems.

Abstract
For Stefan problems, characterized by moving boundaries and discontinuous coefficients due to phase changes, the inherent nonconvexity of the objective functional frequently causes optimization difficulty in randomized neural network approximations. To address this, we propose a Perturbation-Correction Extreme Learning Machine (PCELM) framework built upon extreme learning machines. This method first establishes a basic approximation during an initialization step by minimizing the original nonconvex residual, typically achieving only moderate accuracy. In a subsequent correction step, it determines a correction term by solving a subproblem based on a perturbation expansion around this basic approximation, thereby transforming the task into a convex optimization problem for the output coefficients that ensures rapid convergence. We further provide a rigorous convexity analysis, demonstrating that the PCELM method solves a convex subproblem. Numerical experiments on various Stefan problems, including multi-phase and multi-dimensional ones, confirm that the proposed PCELM method successfully overcomes optimization plateaus, with the correction step consistently delivering a significant improvement of 2-6 orders of magnitude in the relative $L^2$ accuracy.
math.NA 2026-05-12 2 theorems

hp-FEM discretizes elastoplasticity with mixed variational forms

hp-Finite Elements for Elastoplasticity

Two equivalent weak formulations allow efficient semismooth Newton solves and come with error analyses for accurate simulations.

Abstract
This article considers a model problem of elastoplasticity with linearly kinematic hardening and presents hp-finite element discretizations of two equivalent weak formulations, each having its respective advantages. A mixed variational formulation is introduced to resolve the non-differentiability of the so-called plasticity functional appearing in the weak formulation of the model problem as a variational inequality of the second kind. The discretization of the mixed formulation is then represented as a system of decoupled nonlinear equations, which allows the application of an efficient semismooth Newton solver. Finally, an a priori and a posteriori error analysis is given.
math.NA 2026-05-12 Recognition

O(M) algorithm computes Helmholtz Fourier modes independently of k

Fast Evaluation of the Azimuthal Fourier Modes of the 3D Helmholtz Green's Function and Their Derivatives

High relative accuracy holds even for exponentially small modes, making dense linear algebra the main cost in axisymmetric scattering codes.

Abstract
We introduce an $O(M)$ algorithm for evaluating the azimuthal Fourier modes $G_{k,m}$, $m = 0, 1, ..., M$, of the three-dimensional Helmholtz Green's function with real wavenumber $k$, together with all their first- and second-order derivatives with respect to the cylindrical source and target coordinates. The cost is independent of both the wavenumber and the source-target separation, and high relative accuracy is retained even for modes whose magnitude is exponentially small. The method combines contour deformation at a few boundary modes with a boundary-value formulation of the five-term recurrence in the mode index. Derivative quantities are obtained from stable recurrences, adding only a small constant factor to the cost of $G_{k,m}$ alone. Numerical experiments demonstrate high relative accuracy, linear scaling in $M$, and applications to modal boundary integral equation solvers for axisymmetric acoustic scattering, where the $k$-independent kernel evaluator makes dense per-mode linear algebra the dominant cost.
math.NA 2026-05-12 2 theorems

MCMC reaches O(ε²) error for time-changed SDE parameters at O(ε^{-2} log²ε) cost

Parameter Estimation for Partially Observed Time-Changed SDEs

Multilevel variance reduction on the MCMC sampler delivers the stated scaling for Bayesian estimates from discrete observations.

Abstract
In this paper we consider the parameter estimation problem associated to partially observed time-changed SDEs, with observations given at discrete times. In particular we consider both likelihood and Bayesian estimation. We develop new Markov chain Monte Carlo (MCMC) algorithms which allow an unbiased score-based stochastic approximation method to provide likelihood-type parameter estimators. We also use a variant of this MCMC algorithm to perform multilevel-based Bayesian parameter estimation. We prove that this latter method achieves a mean square error of $\mathcal{O}(\epsilon^2)$ ($\epsilon>0$) with a cost of $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$. Our methodologies are tested numerically on both simulated and real data.
math.NA 2026-05-11 2 theorems

Reduced elliptic interface error depends only on interface data accuracy

Interface Reduction for Elliptic Interface Problems with Conservative Flux Reconstruction

Once interface data is accurate, the full solution recovers to roundoff, showing complexity concentrates at the interface.

Abstract
We propose a low-dimensional interface reduction method for elliptic interface problems based on conservative flux reconstruction. The approach combines a fitted $P_1$ finite element discretization with a flux recovery procedure following \cite{ChouTang2000}, yielding locally conservative fluxes that satisfy interface conditions to machine precision. A central result shows that the error of the reduced solution is controlled entirely by the approximation error of the interface data. Numerical experiments for both continuous and discontinuous interface conditions confirm that once the interface data is accurately represented, the full solution is recovered to roundoff accuracy. These results indicate that the essential complexity of elliptic interface problems is concentrated on the interface.
math.NA 2026-05-11 2 theorems

Fast sketching accelerates power method for low-rank approximations

Accelerating Power Method with Fast Sketching for Stronger Low-Rank Approximation

Regularized spectral approximation yields provably efficient SVD, factorization, and Nyström methods with strong benchmark results.

Abstract
The power method is one of the most fundamental tools for extracting top principal components from data through low-rank matrix approximation. Yet, when the target rank is large, the cost of matrix multiplication associated with this procedure becomes a major bottleneck. We develop an algorithmic and theoretical framework for accelerating the power method using fast sketching, which is a popular paradigm in randomized linear algebra. Our framework leads to simple and provably efficient methods for singular value decomposition, low-rank factorization, and Nystr\"om approximation, which attain strong numerical performance on benchmark problems. The key novelty in our analysis is the use of regularized spectral approximation, a property of fast sketching methods which proves more flexible in generalizing power method guarantees than traditional arguments.
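
For context, a generic randomized subspace iteration, i.e. the textbook sketch-then-power-iterate scheme with a Gaussian test matrix. The paper's contribution lies in replacing the Gaussian sketch with fast sketching and analyzing it via regularized spectral approximation, neither of which this sketch attempts.

```python
import numpy as np

rng = np.random.default_rng(3)

def sketched_power_method(A, rank, power_iters=2, oversample=10):
    """Randomized subspace iteration: sketch the range, then power-iterate."""
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # Gaussian sketch
    Y = A @ Omega
    for _ in range(power_iters):
        Q, _ = np.linalg.qr(Y)        # re-orthonormalize for stability
        Y = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Y)
    # Project to the small subspace and take the SVD there.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

# Test on a matrix with rapidly decaying spectrum.
m, n, r = 300, 200, 10
A = (rng.standard_normal((m, r)) * np.logspace(0, -3, r)) @ rng.standard_normal((r, n))
U, s, Vt = sketched_power_method(A, rank=r)
print("approx error:", np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```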
math.NA 2026-05-11 2 theorems

Neural networks enrich FEM to reduce degrees of freedom for oscillations

Neural enrichment finite element method: A hybrid framework for problems with strong oscillations or interface problems

The approach uses minimal prior knowledge and proves optimal convergence for interface problems without extra regularity assumptions.

Abstract
We propose a hybrid method, the Neural Enrichment Finite Element Method (NEFEM), designed for problems involving strong oscillations or interface problems with weak discontinuities. This method is based on the stable generalized finite element method (SGFEM) framework, wherein neural networks (NNs) are introduced as enrichment functions for adaptivity, and the Ritz functional is applied for the training process. This work makes two main contributions. First, the method constructs local subspaces with superior approximation properties, significantly reducing the required number of degrees of freedom (DoFs). Second, minimal \emph{a priori} knowledge is required to define enrichment functions, as the NNs evolve heuristically during training. Furthermore, for smooth problems, we provide a residual-based error estimator and prove both its reliability and efficiency. For interface problems, a theoretical analysis of the optimal convergence of the SGFEM is carried out, notably without imposing additional regularity assumptions. These analytic results guide the network architecture design and training strategies. The performance and effectiveness of the proposed method are validated through several numerical experiments.
math.NA 2026-05-11 Recognition

Active Flux scheme reformulated for stronger dissipation

On Enhancing the Dissipative Behavior of Active Flux Advection Schemes

Added parameters improve dissipation in third-order advection while preserving accuracy and stability

Abstract
In this work, the traditional third-order Active Flux advection scheme is modified by reformulating the method and introducing additional parameters. The effect of these parameters is studied, leading to schemes with improved dissipative properties. These improvements are validated by numerical experiments.
math.NA 2026-05-11 1 theorem

Kernel ridge regression learns PDE solution operators without paired data

Kernel Learning of PDE Solution Operators

Embedding the PDE operator into the kernel yields a closed-form estimator with proven convergence and the ability to extrapolate beyond observed regimes.

Abstract
A kernel-based approach for the learning of the solution operator of general nonhomogeneous partial differential equations (PDEs) is proposed. The method incorporates physical priors, typically encoded through the PDE operator, into a kernel ridge regression framework, and employs a regularization-based formulation to construct an operator learner. This yields a closed-form estimator that is independent of the input functions that determine the underlying PDE. From the perspective of regularization theory, the resulting estimator induces a well-defined operator that links input and output spaces, which contain the functions that define a Dirichlet problem and its solution, respectively. Consequently, it effectively shifts from a PDE solver to an operator-based solver. In contrast to standard supervised learning methods, it does not rely on paired input--output training data and enables systematic extrapolation beyond observed regimes. A full error analysis is conducted, providing convergence rates for the operator-based solver under suitable choices of regularization parameters. Extensive numerical experiments, including Darcy flow and Helmholtz equations, demonstrate that the proposed method achieves high accuracy and efficiency across a range of problem settings, and compares favorably with operator learning approaches in both approximation quality and computational cost.
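
The closed-form estimator at the core of the approach is that of kernel ridge regression. A minimal scalar-regression sketch with a Gaussian kernel is shown below; the paper instead encodes the PDE operator into the kernel, which this generic version does not.

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_kernel(X, Y, ell=0.3):
    """RBF kernel matrix k(x, y) = exp(-|x - y|^2 / (2 ell^2)) for 1D inputs."""
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2 * ell**2))

# Generic supervised training data (the paper replaces this setup with
# physics priors encoded through the PDE operator inside the kernel).
X = np.sort(rng.uniform(0, 1, 40))
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(40)

# Closed-form kernel ridge regression estimator:
#   alpha = (K + lam * I)^{-1} y,   f(x) = k(x, X) @ alpha
lam = 1e-3
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.linspace(0, 1, 200)
f_test = gaussian_kernel(X_test, X) @ alpha
print("max abs error:", np.abs(f_test - np.sin(2 * np.pi * X_test)).max())
```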
math.NA 2026-05-11 Recognition

Semi-implicit FEM keeps saturations between 0 and 1 for Richards equation

Discrete positivity and maximum principles for a finite element discretization of the Richards equation

Local Péclet and row-sum conditions on acute meshes with lumping prevent negative values even in dry, degenerate cases.

Abstract
Standard finite element discretizations of the Richards equation may violate the discrete minimum principle, producing unphysical negative saturations. While existing bound-preserving methods typically rely on computationally expensive fully implicit solvers, we propose a novel semi-implicit finite element framework utilizing a bounded continuous auxiliary variable. Our approach treats the gravity-driven advective term using a linearly implicit technique, which improves the time-step restrictions required by explicit gravity methods near the degenerate limit. We provide rigorous mathematical proofs establishing sufficient geometric and algebraic constraints for discrete positivity and the discrete maximum principle, specifically a local P\'eclet condition and a discrete row-sum condition. When both conditions are satisfied on weakly acute meshes with mass lumping, our framework ensures that numerical solutions strictly respect physical bounds across highly degenerate conditions and initially dry soil regimes. Comprehensive numerical validation demonstrates the method across multiple flow regimes, including cases where algebraic conditions are satisfied, violated, and recovered through mesh refinement.
math.NA 2026-05-11 2 theorems

Boundary integral method models waves past moving objects in varied media

A boundary integral method for wave scattering in a heterogeneous medium with a moving obstacle

Equations stay on the obstacle surface by using rays to track distorted arrival times, stable up to Mach 0.9 in tests.

Abstract
We propose a time-domain boundary integral method to model linear wave propagation with refractive, focusing, and Doppler effects arising from medium heterogeneities and moving obstacles. In contrast to existing techniques, our method avoids volumetric discretization and yields a formulation posed only on the boundary of the obstacle. We combine two classical ingredients: a geometric--optics parametrix to capture leading-order behavior at propagating wavefronts, and a ray-based characterization of the distorted causal geometry. The former provides a framework for defining layer potentials and deriving the associated boundary integral equations, while the latter enables a pure boundary-only formulation. Taken together, these ingredients extend existing numerical techniques for the homogeneous, fixed-boundary case to the heterogeneous, moving-boundary setting, with appropriate modifications to capture the discrete causal structure arising from the intersection of distorted light cones with the worldsheet of the moving boundary. Numerical experiments demonstrate the ability of the method to resolve Doppler effects from moving obstacles, including a rotating turbine configuration, with stable performance up to Mach 0.9. In heterogeneous media, the method captures strong refractive effects from spherical inclusions: wave propagation wrapping around a gas bubble in water, and defocusing from a hot fireball rising through a stratified atmosphere.
math.NA 2026-05-11 2 theorems

Patchwise Fourier extensions achieve linear cost on curved domains

A Patchwise Local Fourier Extension Method for Function Approximation on General Two-Dimensional Domains

Fixed-size local arrays and reused SVD matrices keep online work linear in output points for smooth functions on general 2D domains.

Abstract
We propose a patchwise local Fourier extension method for approximating smooth functions on general two-dimensional domains with curved boundaries. The domain is embedded into a Cartesian background grid and decomposed into rectangular interior patches and one-sided curved trapezoidal boundary patches. After local data transfer, all patches are converted into fixed-size tensor-product arrays and approximated by a truncated-SVD stabilized local Fourier extension procedure. Unlike global Fourier frame approximations, the proposed method localizes both the geometry and the ill-conditioned extension process. For fixed local parameters, the local algebraic operations are performed on fixed-size systems, and the reference Fourier extension matrices and their singular value decompositions are reused across patches. Boundary patches require additional one-dimensional transfer or completion steps, but their costs remain uniformly bounded by the local resolution. Consequently, the online complexity is \(O(N)\), where \(N\) denotes the total number of retained output points for fixed local resolution. Numerical experiments on smooth curved domains and on a mildly rough boundary domain demonstrate that the method achieves high accuracy with a fixed set of local parameters. The smooth-cover correction reduces the boundary-induced error by several orders of magnitude in the full-domain rough-boundary test, without changing the underlying scan-based partition.
math.NA 2026-05-11 Recognition

Algebraic multiscale method converges independently of network geometry

Efficient Multiscale Methods for Highly Heterogeneous Spatial Network Models

Subgraph Poincaré constant and oversampling layers deliver O(C_po) error bounds regardless of heterogeneity contrast.

Abstract
Modeling complex spatial networks with multiscale heterogeneity poses significant mathematical and computational challenges. Lacking explicit PDE discretizations and facing excessive degrees of freedom, conventional methods often become computationally prohibitive. To address these challenges, we propose an efficient multiscale method for highly heterogeneous spatial networks. We construct multiscale basis functions tailored to spatial network models with heterogeneous edge weights and node degrees. A key novelty is that the proposed method does not introduce geometric parameters (such as Dirichlet nodes, distances, or mesh sizes), thereby preserving its purely algebraic nature and ensuring broad applicability. By incorporating a subgraph-wise estimate, we define a Poincar\'e constant $C_{\mathrm{po}}$ that renders the method independent of the underlying graph geometry. Then, through an appropriate choice of the number of graph oversampling layers, we establish an $O(C_{\mathrm{po}})$ convergence independent of the local heterogeneity contrast. Notably, our scheme operates entirely within an algebraic framework, eliminating the need for Dirichlet nodes and positive-definiteness of specific matrices arising in the model. This flexibility enables the simulation of a wider range of physical models and accommodates various boundary conditions. Rigorous theoretical analyses are provided under suitable assumptions, and extensive numerical experiments validate the effectiveness of the proposed approach.
math.NA 2026-05-11 Recognition

Variational equation governs least-squares backward error

A Variational Equation and Lower Bound for the Linear Least-Squares Backward Error

Recast as a generalized eigenvalue problem, the backward error decomposes across multiple right-hand sides and supplies a sketching lower bound comparable to the sketched Karlson-Waldén estimate.

Abstract
This paper derives a new variational equation for the linear least-squares backward error by expressing the backward error in terms of a generalized eigenvalue problem and using results from indefinite linear algebra. For problems with multiple right-hand sides, the variational equation also shows that the backward error can be decomposed as a sum of smaller backward error problems. Applications to stopping criteria for iterative methods are considered, and a new sketching-based lower bound is proposed which is provably of quality comparable to the sketched Karlson-Wald\'{e}n estimate.
math.NA 2026-05-11 2 theorems

Gauss-Seidel NPDo updates converge for principal joint diagonalization

An NPDo Approach for Principal Joint SVD-type Block Diagonalization

Objective increases monotonically while extracting dominant shared diagonal blocks from multiple matrices.

Abstract
This paper is concerned with partial Joint SVD-type Block Diagonalization of several matrices, so that the extracted diagonal parts collectively assume an optimal part of the total mass of all given matrices. For that reason, it will also be referred to as Principal Joint SVD-type Block Diagonalization. When each block is 1-by-1, this amounts to finding a dominant partial joint SVD decomposition of the matrices of interest. An NPDo approach is proposed for maximizing the common dominant block-diagonal parts collectively. It is shown that the NPDo approach combined with Gauss-Seidel-type updating is globally convergent to a stationary point while the objective increases monotonically. Numerical experiments are presented to illustrate the efficiency of the NPDo approach.
math.NA 2026-05-11 2 theorems

Nonlinear jumps reduced to one scalar equation

A scalar interface reduction for nonlinear interface problems

Decomposition into continuous and unit-jump modes keeps the bulk linear while solving a low-dimensional nonlinear equation, for elliptic and parabolic interface problems.

Abstract
We study finite element approximations of elliptic and parabolic interface problems with discontinuous coefficients and nonlinear jump conditions. We introduce a scalar interface reduction in which the solution is decomposed into a continuous component and a unit-jump response mode. This representation isolates the interface nonlinearity into a single scalar variable while the bulk problem remains linear. From this perspective, the nonlinear interface condition is reduced to a scalar nonlinear equation, which may be interpreted as a nonlinear Schur complement associated with the interface degree of freedom. The resulting formulation leads to a simple computational procedure consisting of linear solves combined with a low-dimensional nonlinear update. Numerical results for representative elliptic and parabolic problems confirm second-order accuracy for interface quantities and demonstrate the effectiveness of the proposed approach.
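
A linear-algebra miniature of the reduction: for any jump value $s$, the bulk solve is linear, $u(s) = u_c + s\,u_j$, and the nonlinear interface law closes the system through a single scalar equation in $s$. The SPD toy matrices and the tanh jump law below are hypothetical stand-ins for the finite element interface problem.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)

# Toy SPD system standing in for the discretized bulk problem.
n = 50
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = rng.standard_normal(n)        # bulk load
c = rng.standard_normal(n)        # load induced by a unit interface jump
ell = rng.standard_normal(n)      # functional extracting the interface flux

u_c = np.linalg.solve(A, b)       # continuous component (zero jump)
u_j = np.linalg.solve(A, c)       # unit-jump response mode

# Nonlinear Schur complement in the single scalar unknown s, with the
# hypothetical jump law [u] = tanh(flux):  g(s) = s - tanh(flux(u_c + s*u_j)).
g = lambda s: s - np.tanh(ell @ u_c + s * (ell @ u_j))
s_star = brentq(g, -10.0, 10.0)
u = u_c + s_star * u_j            # full solution from two linear solves
print("jump value s* =", s_star, " residual:", g(s_star))
```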
math.NA 2026-05-11 Recognition

Local Legendre frames give stable approximation from equispaced data

Local Legendre Frame Approximation from Equispaced Data

Subinterval partitioning and TSVD regularization produce quasi-optimal accuracy for smooth and oscillatory functions.

Abstract
We propose a local Legendre frame (LLF) method for function approximation from equispaced data on a finite interval. Motivated by the difficulty of stable high-order polynomial approximation at equispaced points, especially in the presence of the Runge phenomenon, the method partitions the interval into subintervals, maps each subinterval to a common reference interval, and computes local coefficients by a truncated singular value decomposition (TSVD) regularization. Since all subintervals share the same local sampling matrix, the method admits a natural offline--online implementation. We establish a quasi-optimal estimate for the regularized reconstruction and discuss practical parameter selection. Numerical results show that LLF attains high accuracy for relatively smooth and moderately oscillatory functions, while it remains applicable to highly oscillatory functions, although comparable accuracy generally requires more sampling points. For continuous piecewise smooth functions with derivative singularities, the method also provides an effective detect--localize--correct strategy based on one-sided coefficient-energy indicators. These results indicate that LLF provides a stable and flexible local approximation framework for equispaced data.
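
The stabilization device is a truncated-SVD least-squares fit in a Legendre basis. A single-interval sketch is given below; the LLF method applies this per subinterval and reuses the SVD factors across subintervals, which the sketch omits.

```python
import numpy as np
from numpy.polynomial import legendre

def tsvd_legendre_fit(x, f, degree, tol=1e-8):
    """Least-squares Legendre fit with truncated-SVD regularization."""
    V = legendre.legvander(x, degree)          # Legendre Vandermonde matrix
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    keep = s > tol * s[0]                      # discard tiny singular values
    return Vt[keep].T @ ((U[:, keep].T @ f) / s[keep])

# Oversampled equispaced data on [-1, 1]; a high-degree fit would be
# unstable without regularization (Runge phenomenon at equispaced points).
x = np.linspace(-1, 1, 201)
f = np.exp(np.sin(3 * x))
coef = tsvd_legendre_fit(x, f, degree=60)
x_fine = np.linspace(-1, 1, 1000)
err = np.abs(legendre.legval(x_fine, coef) - np.exp(np.sin(3 * x_fine)))
print("max error:", err.max())
```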
math.NA 2026-05-11 Recognition

Spectral scheme approximates fractional diffusion on anisotropic domains

A time dependent fractional order diffusion equation with constant diffusivity matrix

Error analysis and tests show how constant but direction-dependent diffusivity affects solutions in composite-like settings.

Abstract
Of primary interest in this paper is the numerical approximation of a time-dependent, fractional-in-space diffusion equation on a domain assumed to be nonhomogeneous, having different axial diffusion coefficients. This work is motivated by the consideration of composite materials, which can exhibit different material properties along, and perpendicular to, internal planar structures. Careful attention is paid to accurately capturing the boundary behavior of the solution. A spectral approximation scheme is used for the spatial discretization and a backward Euler approximation for the temporal discretization. Following an error analysis of the approximation scheme, numerical experiments are given to demonstrate the effects of the nonhomogeneous domain and to support the theoretical analysis.
math.NA 2026-05-11 Recognition

Karhunen-Loève expansion uses covariance eigenfunctions for optimal random field series

A Primer on the Karhunen-Lo\`eve Expansion

The expansion converges in mean square and minimizes truncation error because it diagonalizes the covariance operator.

Abstract
This article provides a primer on the spectral representation of random fields via the Karhunen-Lo\`eve Expansion (KLE). The goal is to bridge the gap between the theoretical foundations of the KLE and its application in computational modeling under uncertainty. We detail how tools from operator theory and probability are combined to analyze the convergence and optimality of the KLE. We also emphasize the associated computational and mathematical modeling considerations.
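
A discrete sketch of the construction: eigendecompose the quadrature-weighted covariance matrix, keep the leading modes, and sample the truncated expansion. The exponential covariance kernel below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete Karhunen-Loeve expansion of a 1D random field on [0, 1]
# with exponential covariance C(s, t) = exp(-|s - t| / ell).
n, ell = 200, 0.2
t = np.linspace(0, 1, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

# KLE diagonalizes the covariance operator: eigenpairs of C give the modes.
w = 1.0 / n                                       # quadrature weight
lam, phi = np.linalg.eigh(w * C)                  # ascending order
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(w)   # descending, L2-normalized

# Truncated expansion: X(t) ~ sum_{k<m} sqrt(lam_k) xi_k phi_k(t), xi_k ~ N(0,1).
m = 20
xi = rng.standard_normal(m)
sample = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)

# Mean-square optimality: retained variance is the sum of the leading eigenvalues.
print("captured variance fraction:", lam[:m].sum() / lam.sum())
print("sample field values:", sample[:5])
```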
math.NA 2026-05-11 Recognition

Finite measurements yield non-unique minimizers in neural variational methods

Non-Uniqueness of Solutions in Neural Variational Methods

The discrete problems become ill-posed because neural trial functions are constrained only by finitely many linear measurements, independently of the continuous problem's well-posedness.

Abstract
Recent work has shown that strong-form physics-informed neural networks (PINNs) based on pointwise enforcement of differential operators can be ill-posed due to the combination of sufficiently expressive neural network trial spaces with finitely many measurements. In this work, we develop an abstract analytical framework that isolates this finite-information mechanism and extends its applicability beyond strong-form formulations. We apply the framework to three representative variational neural discretizations: the Deep Ritz method, neural network discretizations of variational regularization functionals, and weak PINNs. Despite their differing formulations, these methods constrain the neural trial function only through finitely many linear measurements, such as quadrature evaluations or finite-dimensional test spaces. We show that this structural feature leads to ill-posed discrete optimization problems, manifested by non-uniqueness or degeneracy of minimizers, independently of the well-posedness of the underlying continuous variational problem.
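
The finite-information mechanism can be made concrete even in a linear trial space: with more parameters than measurement functionals, distinct coefficient vectors attain the same (near-zero) discrete loss while defining different functions. The polynomial basis below is a hypothetical stand-in for an expressive neural trial space, chosen so that the non-uniqueness is visible directly in the null space.

```python
import numpy as np

# More parameters than measurements: the discrete loss cannot distinguish
# trial functions that agree at the quadrature/measurement points.
n_params, n_meas = 30, 10
x_quad = np.linspace(0, 1, n_meas)                 # measurement points
features = np.vander(x_quad, n_params)             # generic expressive basis
target = np.sin(2 * np.pi * x_quad)

# One minimizer of the discrete loss (exact fit at the measurement points).
theta, *_ = np.linalg.lstsq(features, target, rcond=None)

# Any null-space perturbation yields a *different* function, identical loss.
_, _, Vt = np.linalg.svd(features)
null_dir = Vt[-1]                                  # features @ null_dir == 0
theta2 = theta + 5.0 * null_dir

loss = lambda th: np.sum((features @ th - target) ** 2)
print("losses:", loss(theta), loss(theta2))        # both near zero
x_fine = np.linspace(0, 1, 400)
vals1 = np.vander(x_fine, n_params) @ theta
vals2 = np.vander(x_fine, n_params) @ theta2
print("max difference between minimizers:", np.abs(vals1 - vals2).max())
```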
math.NA 2026-05-11 2 theorems

Localized iteration and splitting stabilize multiscale parabolic simulations

Decoupling scales via localized subspace iteration and temporal splitting for multiscale parabolic equations

The framework builds offline bases for slow modes and splits time steps to remove contrast-dependent constraints while preserving accuracy.

Figure from the paper full image
abstract click to expand
Simulating diffusion in heterogeneous media presents a significant computational challenge, as resolving microscopic physical scales traditionally demands excessively fine computational grids. To overcome this barrier, we extend the Localized Subspace Iteration (LSI) framework to multiscale parabolic equations. The proposed method constructs optimal, low-dimensional trial spaces by iteratively approximating the dominant eigenspaces of local inverse operators via Localized Standard Subspace Iteration (LSSI) or Localized Krylov Subspace Iteration (LKSI). Because these LSI basis functions are inherently tailored to capture the slow-decaying, low-frequency modes of the parabolic solution, they naturally suppress error accumulation over long-term integration. To further improve computational efficiency, we decouple the basis construction into an offline phase and implement a contrast-independent, partially explicit temporal splitting scheme for online time-stepping. By explicitly advancing the dominant macroscopic modes while implicitly treating high-frequency microscopic corrections, this scheme guarantees stability without imposing restrictive time-step constraints. We establish rigorous a priori error estimates in both the energy and $L^2$ norms. Numerical experiments illustrate the accuracy and efficiency of the LSI framework, particularly highlighting the LKSI method's advantages in handling high-contrast, complex multiscale media.
0
0
math.NA 2026-05-11 2 theorems

Contact constraints split PDE parameter recovery into unique and non-unique cases

On a PDE-based material parameter identification problem with contact constraints

In the membrane-barrier model the same measurements can arise from different coefficients when contact hides variations, yet reconstruction algorithms can still recover the parameter where it is identifiable.

Figure from the paper full image
abstract click to expand
We consider the identification of a scalar coefficient in a PDE-based parameter estimation problem with contact constraints. The considered problem can be used as an idealized model of a membrane under forces, constrained by a barrier or indenter. More generally, it serves as a benchmark for the analysis of more complex contact problems and the development of corresponding reconstruction algorithms. In this paper, we discuss both the forward and inverse parameter estimation problems, as well as uniqueness and non-uniqueness issues caused by the contact constraints. Furthermore, we consider the design and implementation of reconstruction approaches, which we test on numerical examples, illustrating both uniqueness and non-uniqueness as well as parameter identifiability.
0
0
math.NA 2026-05-11 Recognition

RQMC in walk-on-spheres beats Monte Carlo variance rates

Randomized quasi-Monte Carlo for walk on spheres

Across five examples the median variance decay was slightly faster than n^-1.1, with variance reduction factors of 1.8 to 10.7 at 131072 points.

Figure from the paper full image
abstract click to expand
We investigate the use of randomized quasi-Monte Carlo (RQMC) in walk on spheres algorithms to solve boundary value problems for functions with Dirichlet boundary conditions in $\mathbb{R}^d$. For harmonic functions with $d=2$, the integrands of interest are periodic indicator functions over regions $\Theta$ in the torus $\mathbb{T}^k$. We give conditions for $\partial\Theta$ to have $k-1$ dimensional Minkowski content which allows us to use results of He and Wang (2015). The RQMC estimates involve multiple values of $k$. We see sampling variances decreasing with the number $n$ of sample points at slightly better than Monte Carlo rates. The median variance rate in $4$ RQMC methods over $5$ worked examples, including some with $d=3$ and some with nonzero source functions, was slightly better than $O(n^{-1.1})$. The variance reduction factors ranged from $1.8$ to $10.7$ at $n=2^{17}$. None of the four RQMC methods dominated the others.
0
0
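
For readers unfamiliar with the underlying sampler, here is a plain Monte Carlo walk-on-spheres sketch for the Laplace equation on the unit disk, using iid sampling rather than the paper's RQMC point sets; the harmonic Dirichlet data, test point, and walk count are illustrative.

import numpy as np

rng = np.random.default_rng(1)
g = lambda p: p[0] ** 2 - p[1] ** 2   # harmonic data: also the exact interior solution

def walk_on_spheres(x0, n_walks=20_000, eps=1e-4):
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.hypot(x[0], x[1])    # distance to the unit circle
            if r < eps:                        # near the boundary: read the data
                total += g(x / np.hypot(x[0], x[1]))
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

x0 = (0.3, 0.4)
print("WoS estimate:", walk_on_spheres(x0), " exact:", g(np.array(x0)))
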
math.NA 2026-05-11 Recognition

Augmented Krylov subspace jointly approximates data fit and log-det for FIR kernels

Kernel-based linear system identification using augmented Krylov subspaces

Shift invariance lets the same subspace evaluate the maximum-likelihood objective for many regularization values at low extra cost.

abstract click to expand
We propose a novel Krylov subspace method for estimating the finite impulse response (FIR) of a one-dimensional linear time-invariant system. The method approximates the system's FIR using a kernel-based formulation combined with hyperparameter selection based on maximum likelihood estimation (MLE), which requires repeated evaluation of two terms: the data fit $\boldsymbol{y}^{\top} (\lambda \boldsymbol{I} + \boldsymbol{A})^{-1} \boldsymbol{y}$ and the model complexity $\log(\det (\lambda \boldsymbol{I} + \boldsymbol{A}))$, where $\boldsymbol{A}$ is a certain positive semidefinite matrix that admits fast matrix--vector products and $\lambda > 0$ is a regularization parameter. Instead of approximating these two quantities separately, we jointly approximate them using a single augmented Krylov subspace for $\boldsymbol{A}$. One major benefit of augmentation is that we obtain accelerated convergence when approximating the data fit quadratic form, through implicit preconditioning. Thanks to the shift invariance of Krylov subspaces, the extracted approximations can be used to evaluate the MLE objective for many values of $\lambda$ at little additional cost. We derive error bounds for the approximations, reflecting the benefits of augmentation, which are demonstrated through multiple numerical experiments.
0
0
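
A minimal sketch of the Krylov mechanics the abstract relies on, assuming a plain (not augmented) Lanczos decomposition and a single-probe stochastic Lanczos quadrature estimate for the log-determinant; the test matrix and shifts are illustrative. Shift invariance lets one tridiagonal matrix serve every value of lambda.

import numpy as np

def lanczos(matvec, v0, k):
    # k-step Lanczos with full reorthogonalization; returns tridiagonal T
    n = v0.size
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = v0 / np.linalg.norm(v0)
    for j in range(k):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
n = 400
M = rng.standard_normal((n, n)) / np.sqrt(n)
A = M @ M.T                                   # PSD matrix with fast matvecs
y = rng.standard_normal(n)

T = lanczos(lambda v: A @ v, y, k=60)
theta, U = np.linalg.eigh(T)                  # Ritz values = quadrature nodes
tau2 = U[0, :] ** 2                           # quadrature weights

for lam in (1e-2, 1e-1, 1.0):                 # one decomposition, many shifts
    fit = (y @ y) * np.sum(tau2 / (lam + theta))
    logdet = n * np.sum(tau2 * np.log(lam + theta))   # single-probe SLQ estimate
    exact_fit = y @ np.linalg.solve(lam * np.eye(n) + A, y)
    exact_ld = np.linalg.slogdet(lam * np.eye(n) + A)[1]
    print(f"lam={lam:5.2f}  fit {fit:10.3f} ({exact_fit:10.3f})"
          f"  logdet {logdet:8.1f} ({exact_ld:8.1f})")
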
math.NA 2026-05-11 Recognition

Convex limiter preserves invariant domains for high-order conservation schemes

Invariant domain preserving limiting of time explicit and time implicit discretizations for systems of conservation laws

The technique limits antidiffusive fluxes so the solution becomes a convex combination of low-order states known to respect all invariant domains.

Figure from the paper full image
abstract click to expand
This work concerns the design and analysis of a limiting technique that allows the preservation of invariant domains for high-order numerical approximations of nonlinear hyperbolic systems of conservation laws. The method can be applied to any conservative discretization method in space as well as to a wide range of explicit and implicit time integration schemes. The method limits the high-order solution around a low-order accurate solution that is known to preserve all the invariant domains. It generalizes the flux-corrected transport limiter [J. P. Boris and D. L. Book, J. Comput. Phys., 11, 1973; S. T. Zalesak, J. Comput. Phys., 31, 1979] to systems of conservation laws and relies on the limitation of antidiffusive fluxes, but defines the limiting coefficients so as to express the limited solution as a convex combination of invariant domain preserving quantities similarly to the convex limiting framework [Guermond et al., Comput. Methods Appl. Mech. Engrg., 347, 2019]. We give details on the derivation of this limiting technique and provide some illustration with finite volume or discontinuous Galerkin (DG) space discretizations associated to explicit or implicit Runge-Kutta methods as well as to time DG integrations. The limiter is applied iteratively to refine the limited solution around the high-order one, while preserving the invariant domains, and a heuristic is proposed to accelerate its convergence. Numerical experiments solving one- and two-dimensional problems involving scalar hyperbolic equations and the compressible Euler equations are presented to illustrate the properties of these schemes.
0
0
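
A toy version of the limiting idea for 1D scalar advection, assuming an upwind low-order flux, a Lax-Wendroff high-order flux, and Zalesak-style limiting coefficients; the paper's convex-limiting construction is more general, so this is only the structural skeleton.

import numpy as np

n, a, cfl, steps = 200, 1.0, 0.5, 200
dx = 1.0 / n
dt = cfl * dx / a
x = np.arange(n) * dx
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)        # square pulse on [0, 1]

for _ in range(steps):
    um = np.roll(u, 1)                                # u_{i-1}
    f_lo = a * um                                     # upwind flux at i-1/2
    f_hi = 0.5 * a * (um + u) - 0.5 * a**2 * dt / dx * (u - um)  # Lax-Wendroff
    A = f_hi - f_lo                                   # antidiffusive flux at i-1/2
    u_lo = u - dt / dx * (np.roll(f_lo, -1) - f_lo)   # bound-preserving update
    umax = np.maximum(np.maximum(np.roll(u_lo, 1), u_lo), np.roll(u_lo, -1))
    umin = np.minimum(np.minimum(np.roll(u_lo, 1), u_lo), np.roll(u_lo, -1))
    # Zalesak-style coefficients capping antidiffusive in/outflow per cell
    Pp = dt / dx * (np.maximum(A, 0.0) - np.minimum(np.roll(A, -1), 0.0))
    Pm = dt / dx * (np.maximum(np.roll(A, -1), 0.0) - np.minimum(A, 0.0))
    Rp = np.where(Pp > 1e-14, np.minimum(1.0, (umax - u_lo) / np.maximum(Pp, 1e-14)), 1.0)
    Rm = np.where(Pm > 1e-14, np.minimum(1.0, (u_lo - umin) / np.maximum(Pm, 1e-14)), 1.0)
    C = np.where(A >= 0.0, np.minimum(Rp, np.roll(Rm, 1)),
                 np.minimum(np.roll(Rp, 1), Rm))      # limiter at i-1/2
    u = u_lo - dt / dx * (np.roll(C * A, -1) - C * A) # limited correction

print("bounds preserved:", u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12)
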
math.NA 2026-05-11 2 theorems

NSPOD preconditioner cuts Krylov iterations below algebraic multigrid

NSPOD: Accelerating Krylov solvers via DeepONet-learned POD subspaces

DeepONet learns POD bases that act as a multigrid-like preconditioner for parametric solid mechanics problems on complex unstructured meshes.

Figure from the paper full image
abstract click to expand
The convergence of Krylov-based linear iterative solvers applied to parametric partial differential equations (PDEs) is often highly sensitive to the domain, its discretization, the location/values of the applied Dirichlet/Neumann boundary conditions, body forces and material properties, among others. We have previously introduced hybridization of classical linear iterative solvers with neural operators for specific geometries, but they tend to not perform well on geometries not previously seen during training. We partially addressed this challenge by introducing the deep operator network Geo-DeepONet and hybridizing it with Krylov-based iterative linear solvers, which, despite learning effectively across arbitrary unstructured meshes without requiring retraining, led to only modest reductions in iterations compared to state-of-the-art preconditioners. In this study we introduce Neural Subspace Proper Orthogonal Decomposition (NSPOD), a multigrid-like deep operator network-based preconditioner which can dramatically reduce the number of iterations needed for convergence in Krylov-based linear iterative solvers, even when compared to state-of-the-art methods such as algebraic multigrid preconditioners. We demonstrate its efficiency via numerical experiments on a linearized version of solid mechanics PDEs applied to unstructured domains obtained from complex CAD geometries. We expect that the findings in this study lead to more efficient hybrid preconditioners that can match, or possibly even surpass, the convergence properties of the current gold standard preconditioning methods for solid mechanics PDEs.
0
0
math.NA 2026-05-11 2 theorems

New CG-DG schemes conserve hyperbolic laws pointwise on any volume

On structure-preserving and pointwise conservative continuous DG schemes for hyperbolic systems

Discontinuous solution spaces paired with continuous flux spaces enforce integral conservation and exact identities without mesh alignment.

abstract click to expand
We present a new class of structure-preserving semi-discrete continuous-discontinuous Galerkin (CG-DG) finite element schemes for linear and nonlinear hyperbolic systems of partial differential equations on unstructured simplex meshes that automatically satisfy the following properties: i) the new schemes are not only cellwise conservative, but also locally pointwise conservative everywhere, hence they satisfy the integral form of the conservation law on arbitrary control volumes that do not have to coincide with the mesh at all; ii) the new methods naturally satisfy the two basic vector calculus identities $\nabla \cdot \nabla \times \mathbf{A} = 0$ and $\nabla \times \nabla Z = 0$ exactly pointwise locally and globally everywhere on the discrete level; iii) for linear symmetric hyperbolic systems the schemes are naturally energy conservative for the square energy, i.e. nonlinearly stable in the $L^2$ norm. The key ingredient of the new CG-DG schemes is the use of two different but compatible approximation spaces: the classical DG space $\mathcal{U}_h^N$ of discontinuous piecewise polynomials of degree up to $N$ and a classical finite element space $\mathcal{W}_h^{N+1}$ of globally continuous piecewise polynomials of degree $N+1$. In the new CG-DG schemes, the discrete solution $\mathbf{u}_h$ is sought in $\mathcal{U}_h^N$, while a suitable discrete flux field $\tilde{\mathbf{f}}_h$ is computed in $\mathcal{W}_h^{N+1}$. For $N=0$ our new schemes are directly related to cell-centered finite volume schemes with suitable vertex-based fluxes. All claimed properties of the schemes are first mathematically proven and are then also verified via suitable numerical tests. We show applications of our approach to three linear and nonlinear hyperbolic systems.
0
0
math.NA 2026-05-11 2 theorems

Numerical schemes preserve 1/ε entropy scaling for conservation laws

Kolmogorov $\varepsilon$-entropy of numerical solutions for scalar conservation laws with convex flux

Conservative monotone methods obeying discrete OSLC match exact solution compactness bounds under grid constraints.

abstract click to expand
Building on the information-theoretic perspective of P.~D.~Lax [\textit{Proc.\ Sympos., Math.\ Res.\ Center, Univ.\ Wisconsin}, 1978], we establish a two-sided quantitative compactness estimate for numerical solutions of scalar conservation laws with a uniformly convex flux, expressed in terms of Kolmogorov $\varepsilon$-entropy. We prove that, under specific grid constraints, conservative, monotone finite-difference schemes satisfying a discrete one-sided Lipschitz condition (OSLC) preserve the $1/\varepsilon$ Kolmogorov entropy scaling of the corresponding exact entropy solution set, matching the bounds obtained by De~Lellis and Golse [\textit{Comm.\ Pure Appl.\ Math.}\ \textbf{58} (2005)] and by Ancona, Glass, and Nguyen [\textit{Comm.\ Pure Appl.\ Math.}\ \textbf{65} (2012)]. Specifically, the upper bound follows from the discrete OSLC, while the lower bound relies on a uniform approximation argument on a bounded-variation precursor class. Our results show that prototypical first-order methods are high-resolution in Lax's sense. Finally, we abstract the lower bound mechanism into a general transfer principle, discuss implications for information recovery via post-processing, and indicate directions for future work.
0
0
math.NA 2026-05-11 Recognition

Spanning-tree gauge makes Newton method work for graph optimal transport

Newton's method for optimal transport problem on graphs

Reducing cycle redundancies yields a sparse solvable system and guarantees positive densities on lattices under a CFL condition.

Figure from the paper full image
abstract click to expand
In this paper, we study dynamical optimal transport on a connected graph from the perspective of the Benamou-Brenier formulation, where densities are assigned to vertices and velocities to edges. However, directly using Newton's method on the resulting nonlinear systems encounters two potential difficulties: (i) if the graph contains cycles, edge variables are not unique, and (ii) there is no guarantee that the density variables remain positive. To address these challenges, we introduce a finite-difference-type Newton method that eliminates cycle-induced redundancies through a spanning-tree gauge, resulting in a reduced set of independent variables and a well-posed, sparse linear system. For the lattice graph arising from the continuous optimal transport problem, density positivity can also be guaranteed by using an upwind discretization subject to a CFL-type condition. We further demonstrate the versatility of the proposed scheme by applying it to a range of problems, including optimal transport on lattices and random graphs, inverse optimal transport problems, and social network analysis.
0
0
math.NA 2026-05-11 1 theorem

Neural operator cuts iterations for convolution equations

Solving Convolution-type Integral Equations using Preconditioned Neural Operators

Preconditioned high-frequency training plus weighted Jacobi beats multigrid and PCG on large 1-D and 2-D ill-conditioned systems.

Figure from the paper full image
abstract click to expand
Convolution-type integral equations arise from various fields, \textit{e.g.}, finite impulse response filters in signal processing and deblurring problems in image processing. When solving these equations, conventional numerical methods, like the multigrid method, can only efficiently solve the low-frequency components in the error, but not the high-frequency components. In this paper, we apply neural operators to address this issue. By adopting a preconditioning approach, we propose a novel training strategy that trains neural operators to solve the high-frequency components efficiently. Then, we combine the neural operators with some classical iterative solvers, like the weighted Jacobi method, to obtain an efficient hybrid iterative algorithm for the integral equations. We analyze the generalization error of our training strategy and the convergence of the hybrid iterative algorithm. We test our algorithms on large-scale and ill-conditioned linear systems discretized from one- and two-dimensional convolution-type integral equations. Our proposed algorithm significantly outperforms the multigrid method and the preconditioned conjugate gradient method in both iteration numbers and computational time.
0
0
math.NA 2026-05-11 2 theorems

Eulerian scheme converges for VPBGK plasma model

Convergence of an Eulerian scheme for the Vlasov-Poisson-BGK model

Error bounds in weighted L∞ norms are derived for the distribution function and electric field despite the velocity-index mixing induced by the field.

Figure from the paper full image
abstract click to expand
The Vlasov-Poisson-BGK (VPBGK) model is a kinetic model for describing the dynamics of collisional plasmas. Although various numerical schemes have been developed for it, a corresponding convergence theory has been absent. This paper fills this gap by presenting the first convergence analysis for a non-splitting, finite-difference Eulerian scheme discretized on the full phase-space grid. A major theoretical obstacle is the mixing of velocity indices induced by the electric field, which hinders the derivation of a uniform lower bound for the discrete solution. To overcome this stability challenge, we propose a modified lower bound estimate suitable for ionized systems that incorporates the step-wise degradation. Under a truncated velocity domain with a Neumann boundary condition, we establish error estimates for the distribution function in a weighted $L^{\infty}$ norm and for the electric field in an $L^{\infty}$ norm, respectively.
0
0
math.NA 2026-05-11 Recognition

Sparse RFNNs with sSVD tackle stiff ODEs efficiently

Sparse Random-Feature Neural Networks with Krylov-Based SVD for Singularly Perturbed ODE

Structured sparsity boosts rank and allows fast sparse SVD solving while keeping accuracy high on advection-dominated 1D problems.

Figure from the paper full image
abstract click to expand
Random-feature neural networks (RFNNs), including architectures with fixed hidden layers and analytically determined output weights, offer fast training but often suffer from issues due to dense representations of the hidden layer activation. Their reliance on dense feature mappings and least squares solvers can limit scalability and numerical stability, particularly for high-dimensional or stiff systems. Specifically, the activation matrix is observed to be low-rank and extremely ill-conditioned. In this work, we propose a sparse framework for RFNNs that integrates structured sparsity into the hidden layer activations, which increases the rank, and employs Sparse Singular Value Decomposition (sSVD) to solve the resulting linear least squares problem scalably and efficiently while mitigating the poor conditioning. We explore the theory behind the Lanczos-Golub-Kahan bidiagonalization technique for sparse SVD and conduct experiments to identify its limitations and justify the requirement for an orthogonalization step in our application. Then, we demonstrate that the proposed method maintains or improves solution accuracy on the benchmark one-dimensional steady convection-diffusion equation with strong advection, while achieving substantial gains in training efficiency and robustness compared to standard dense implementations.
0
0
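
A small sketch of the computational core under stated assumptions: a randomly generated sparse activation matrix, and scipy's Lanczos-based svds standing in for the paper's sSVD; sizes, density, and the regularization cutoff are illustrative.

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
n_samples, n_features, rank = 2000, 500, 60
H = sparse_random(n_samples, n_features, density=0.05, random_state=0,
                  data_rvs=rng.standard_normal)      # sparse activation matrix
y = rng.standard_normal(n_samples)                   # target values

U, s, Vt = svds(H, k=rank)                           # sparse truncated SVD
# Regularized pseudoinverse solve: drop directions with tiny singular values
keep = s > 1e-8 * s.max()
w = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

res = np.linalg.norm(H @ w - y) / np.linalg.norm(y)
print(f"rank used: {keep.sum()}, relative residual: {res:.3f}")
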
math.NA 2026-05-11 Recognition

Rational functions reveal Kolmogorov terms by inspection

Variable decoupling and the Kolmogorov Superposition Theorem for rational functions

No computation needed to separate variables in the superposition for any multivariate rational function.

Figure from the paper full image
abstract click to expand
This work shows that for rational multivariate functions, the Kolmogorov Superposition Theorem (KST) involves several single-variable functions, which can be written down by inspection. In other words, no computation is required for decoupling the variables of multivariate rational functions. The key tool for this development is the Loewner Framework for multivariate functions. Applications of this result involve approximating multivariate non-rational functions by low-complexity multivariate rational and polynomial functions.
0
0
math.NA 2026-05-11 Recognition

Stochastic column-block method solves sparse nonlinear systems

On a stochastic column-block Bregman method for nonlinear systems

It converges with a proven rate bound and performs efficiently on image recovery tasks.

Figure from the paper full image
abstract click to expand
Sparse solution problems play an important role in both signal processing and image restoration. In this paper, we propose a stochastic column-block nonlinear Bregman method for efficiently computing sparse solutions to nonlinear systems. Under certain assumptions, we analyze the convergence of the proposed method and derive an upper bound for its convergence rate. Numerical experiments, including an image recovery problem, are presented to illustrate the efficiency of the proposed method.
0
0
math.NA 2026-05-11 1 theorem

Proximal method enforces exact isometry at mesh cell barycenters

Proximal Galerkin for the isometry constraint

Eliminates preprocessing and yields mesh-independent convergence for nonlinear plate models.

Figure from the paper full image
abstract click to expand
We resolve a longstanding open problem in the computational modeling of nonlinear plates by introducing a numerical method that exactly enforces the isometry constraint, namely, that the first fundamental form of the mid-surface coincides with the identity tensor. Several numerical methods have been proposed to approximate solutions of such manifold-constrained variational problems using gradient flows with tangent space updates. However, this class of methods presents two main challenges. First, a preprocessing step is required to enforce the boundary conditions and generate an initial guess sufficiently close to an isometry. Second, each step of the gradient flow typically increases the isometry defect. We adopt an alternative approach based on the proximal Galerkin framework, originally introduced for variational problems with convex inequality constraints. The resulting method preserves the geometric structure of the feasible set and yields an efficient algorithm in which each iterate is an exact isometry at the barycenter of every mesh cell. In contrast to existing methods, no preprocessing step is required, enabling broader applicability of this important category of mathematical models. Numerical experiments on standard benchmarks demonstrate that the method converges to a prescribed error tolerance in an asymptotically mesh-independent number of iterations and requires substantially fewer iterations than previous methods, even on coarse meshes.
0
0
math.NA 2026-05-08 Recognition

Neural embedding slows ODE dynamics for 20x fewer steps

Accelerating the Simulation of Ordinary Differential Equations Through Physics-Preserving Neural Networks

Derived latent equations remain exact, so standard integrators advance the solution with far larger time steps while preserving accuracy.

Figure from the paper full image
abstract click to expand
Numerical simulation of ordinary differential equations (ODEs) can be challenging when the system exhibits high accelerations and rapidly changing dynamics. Under these conditions the ODE solver often needs to take very small time steps in order to resolve the solution accurately, resulting in increased computational cost. In order to accelerate the simulation of these ODEs we present a novel methodology that uses a pseudo-invertible neural network to map system states into a high-dimensional latent-space. The network is then trained so that the dynamics in this learned latent space are slow, and can be simulated with relatively few function calls. Unlike existing neural methods, the latent dynamic equations are not learned from trajectory data, but derived from the original system equations and the chain rule. This allows the method to generalize better than existing approaches because the derived equations are correct by construction. In this work, we derive latent state equations of motion for any general ODE, and describe the loss function used to enforce slow time evolution of the latent states. We then apply this technique to multiple example ODEs and show that these problems can be solved with $3$x to $20$x fewer function calls for the same accuracy when simulating in the learned latent space. This reduction in cost could decrease computational demands for scientific simulations across engineering and physics applications.
0
0
math.NA 2026-05-08 2 theorems

Fewer order conditions yield efficient symplectic integrators for cubic and quartic forces

Efficient symplectic integrators for cubic and quartic potentials

Polynomial structure reduces the number of order conditions needed, allowing high-order methods that beat standard compositions and RKN schemes on these potentials.

Figure from the paper full image
abstract click to expand
We present a set of new, efficient high-order symplectic methods designed for Hamiltonian systems with cubic or quartic potentials. By demonstrating that polynomial potentials require fewer order conditions, we develop schemes that outperform both standard symmetric compositions of second-order methods and existing RKN splitting methods. Numerical results confirm their improved efficiency over state-of-the-art alternatives found in the literature.
0
0
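
For orientation, the second-order baseline such methods compete against is a Strang splitting (velocity Verlet) for the quartic Hamiltonian H = p^2/2 + q^4/4; both subflows are exact, so the map is symplectic and the energy error stays bounded. This is a sketch of the baseline only, not the paper's high-order compositions; step size and initial data are illustrative.

import numpy as np

def verlet(q, p, h, steps):
    energy = np.empty(steps)
    for i in range(steps):
        p -= 0.5 * h * q**3        # half kick:  p' = -dV/dq with V = q^4/4
        q += h * p                 # full drift: q' = p
        p -= 0.5 * h * q**3        # half kick
        energy[i] = 0.5 * p**2 + 0.25 * q**4
    return q, p, energy

q, p, E = verlet(q=1.0, p=0.0, h=0.01, steps=100_000)
print("energy oscillation:", E.max() - E.min())   # bounded, roughly O(h^2)
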
math.NA 2026-05-08

Polar factor yields retraction with closed-form inverse on symplectic Stiefel

A polar-factor retraction on the symplectic Stiefel manifold with closed-form inverse

The map converts points on the manifold to tangent vectors by an explicit formula, enabling direct vector arithmetic in local coordinates.

Figure from the paper full image
abstract click to expand
In Riemannian computing applications, it is crucial to map manifold data to a Euclidean domain, where vector space arithmetic is available, and back. Classical manifold theory guarantees the existence of such mappings, called charts and parameterizations, or, collectively, local coordinates. When computational efficiency is of the essence, practitioners usually resort to retraction maps to define local coordinates. Retractions yield first-order approximations of the Riemannian normal coordinates. This work introduces a new retraction on the symplectic Stiefel manifold that features a closed-form inverse. We expose essential features and compare the numerical performance to a selection of existing retractions. To the best of our knowledge, prior to this work, the so-called Cayley retraction was the only retraction on the symplectic Stiefel manifold with known closed-form inverse.
0
0
math.NA 2026-05-08

Low-rank kernel operator learned once and reused for all option exercise dates

Low-rank kernel methods for American option pricing

Reformulating continuation-value estimation as a single offline operator in a kernel space removes repeated regressions from the backward-in-time recursion.

Figure from the paper full image
abstract click to expand
We propose a scalable and theoretically grounded low-rank conditional expectation model for recursive Monte Carlo optimal stopping problems, in particular American option pricing. Our method reformulates the estimation of continuation values as a learning problem in a reproducing kernel Hilbert space, in which the conditional expectation is represented as a linear operator acting on future payoffs. This perspective yields an offline-online decomposition: the operator is learned once from simulated data and subsequently reused across all exercise dates, eliminating the need to recompute regression models at each step of the backward recursion. We establish convergence guarantees and derive bounds quantifying the approximation errors across exercise dates. Numerical experiments demonstrate the speed and accuracy of the proposed approach relative to extant methods.
0
0
math.NA 2026-05-08

Recycled subspaces solve imaging problems with uncertain geometry

Nonlinear RMM-GKS for Large-Scale Dynamic and Streaming Inverse Problems with Uncertain Forward Operators

The method jointly updates images and parameters using bounded memory for large dynamic datasets.

Figure from the paper full image
abstract click to expand
Many practical imaging systems suffer from uncertainty in acquisition geometry -- such as projection angles in computed tomography or sensor positions in photoacoustic tomography -- leading to nonlinear inverse problems that require joint estimation of both the image and the forward model parameters. Standard approaches that assume a known linear forward operator fail to account for these uncertainties, resulting in significant reconstruction artifacts. We propose a nonlinear recycled majorization-minimization generalized Krylov subspace (NL-RMM-GKS) framework for large-scale inverse problems with uncertain forward operators. The method extends MM-GKS to nonlinear settings by combining majorization-minimization for nonsmooth regularization with Krylov subspace projection and subspace recycling, ensuring bounded memory usage. Two complementary formulations are developed: an alternating minimization approach that alternates between image updates and Gauss-Newton parameter estimation, and a variable projection approach that eliminates the image variable and optimizes directly over the parameters using inexact inner solves. We further introduce streaming variants that process data sequentially, enabling reconstruction from large or dynamically acquired datasets without storing the full operator. For dynamic problems, we incorporate two temporal regularization strategies -- optical flow and anisotropic total variation -- as plug-in choices within the framework. We carry out rigorous numerical experiments in fan-beam computed tomography and photoacoustic tomography to demonstrate that our proposed framework achieves high-quality reconstructions with bounded memory requirements, making it suitable for large-scale dynamic imaging problems.
0
0
math.NA 2026-05-08

Bulk harmonic extension bounds CutFEM condition number independent of cut size

Stabilization and Operator Preconditioning of Bulk--Surface CutFEM via Harmonic Extension

The reduced operator stays symmetric and well-conditioned for the Laplace-Beltrami problem on a curve no matter how small the cut cells become.

Figure from the paper full image
abstract click to expand
We present a cut finite element method (CutFEM) for the Laplace--Beltrami equation on a smooth closed curve $\Gamma\subset\mathbb{R}^2$ coupled to a harmonic bulk problem in $\Omega$ that requires \emph{no explicit stabilization}: no ghost penalty, normal-gradient penalty, or cell agglomeration. The classical ill-conditioning of trace finite element spaces on cut cells arises from basis functions with vanishingly small support on $\Gamma$; our observation is that coupling the surface discretization to a discrete bulk harmonic extension, realized through the lattice Green's function (LGF) on the background Cartesian grid, rigidly constrains the degrees of freedom responsible for this ill-conditioning. The reduced operator, obtained by a congruence transform of the full CutFEM stiffness, inherits symmetry and positive semi-definiteness from the variational form and has a condition number bounded uniformly in the smallest cut-cell ratio. The direct reconstruction has the standard $O(h^{-2})$ mesh conditioning; the single-layer density formulation acts as operator preconditioner and yields $O(1)$ conditioning, which is amenable to iterative solvers; the double-layer density formulation remains cut-independent with $O(h^{-2})$ scaling. We prove optimal $O(h)$/$O(h^2)$ error estimates in $H^1(\Gamma)$/$L^2(\Gamma)$ under standard regularity assumptions, establish the cut-independent conditioning rigorously, and demonstrate both the optimal convergence rate and robustness with respect to small cuts in numerical experiments.
0
0
math.NA 2026-05-08 Recognition

Complex composition raises BDF order by one and stability to eight

Error estimation for numerical approximations of ODEs via composition techniques. Part II: BDF methods

The imaginary part of the composed flow estimates error while the real part integrates to order p+1 with the same number of steps.

abstract click to expand
Integration of Ordinary Differential Equations (ODEs) using Backward Difference Formula (BDF) methods with p backward steps achieves order p accuracy if specific conditions are met. This work extends the composition technique with complex coefficients to the implicit BDF schemes, increasing the approximation order by one without additional backward points. The imaginary part of the composed flow provides an error estimate of order p + 1. Linear stability analysis reveals that the composed schemes break the Dahlquist barrier, achieving stability up to order eight. Computationally, the composed flow outperforms BDF schemes when using the same number of backward points, allowing for higher accuracy with lower CPU time. For non-uniform meshes, the ratio of consecutive time steps, which influences stability, appears as a parameter in the roots of algebraic equations relative to the composed flow. Having a complex root with a positive real part implies a lower bound on this ratio depending on the order. For example, the bound is 0.4506 for order three and 0.6806 for order four. Numerical tests demonstrate the effectiveness of this technique in improving the accuracy and stability compared to BDF methods.
0
0
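
A minimal sketch of the composition idea for the simplest member of the family, implicit Euler (p = 1): two substeps with conjugate complex fractions (1 ± i)/2 compose to a second-order step whose imaginary part acts as an error indicator. The test equation y' = -y^2 is an illustrative choice because the implicit substep has a closed-form complex solve; this is not the paper's BDF construction.

import numpy as np

def be_substep(y0, ah):
    # implicit Euler for y' = -y^2: solve ah*y1^2 + y1 - y0 = 0, taking the
    # root on the branch with y1 -> y0 as ah -> 0 (closed form, complex ok)
    return (-1.0 + np.sqrt(1.0 + 4.0 * ah * y0)) / (2.0 * ah)

def composed_step(y0, h):
    a = 0.5 * (1.0 + 1.0j)                       # conjugate substep fractions
    return be_substep(be_substep(y0 + 0.0j, a * h), np.conj(a) * h)

y, h, T, indicator = 1.0, 0.05, 1.0, 0.0
for _ in range(int(round(T / h))):
    ynew = composed_step(y, h)
    indicator = max(indicator, abs(ynew.imag))   # per-step error indicator
    y = ynew.real                                # advance the order-2 part

exact = 1.0 / (1.0 + T)                          # y' = -y^2, y(0) = 1
print("global error:", abs(y - exact), " max imaginary indicator:", indicator)
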
math.NA 2026-05-08

Lower bounds on beam buckling loads from known interpolation constants

Two-sided eigenvalue bounds for the Euler-Bernoulli beam

Explicit error constants in finite elements give guaranteed lower estimates for eigenvalues of variable-stiffness beams.

abstract click to expand
We derive novel guaranteed lower bounds for eigenvalues of the Euler-Bernoulli beam with variable bending stiffness. While the standard finite element Rayleigh-Ritz method automatically yields upper bounds, we obtain lower bounds by employing interpolation error estimates with the explicitly known value of the associated constant. This approach is especially efficient and easy to apply for piecewise constant bending stiffness. For general variable material parameters, we obtain guaranteed lower bounds through an auxiliary beam-bending problem. The first eigenvalue is of primary interest in applications because it represents the critical load that causes buckling of the beam. Our method is, however, suitable also for the higher buckling modes. In addition, it can be applied to the physically more relevant nonlinear Gao beam model with piecewise constant bending stiffness, which has the same first eigenvalue as the classical Euler--Bernoulli beam. The presented numerical experiments illustrate the performance of the proposed eigenvalue bounds, demonstrating their convergence rates.
0
0
math.NA 2026-05-08

Bifocusing fails at 180-degree bistatic angle

Mathematical and experimental validation of the bifocusing method tailored for bistatic measurement

A Bessel-series proof shows the indicator function degenerates and targets become invisible exactly when the bistatic angle reaches 180 degrees.

abstract click to expand
In this paper, we design a bifocusing-based imaging strategy for the rapid identification of small penetrable dielectric inhomogeneities within a two-dimensional bistatic measurement setup. To address the applicability and limitations, we carefully explore the mathematical structure of the indicator function by establishing a relationship involving the infinite series of Bessel functions, the material characteristics, and the bistatic angle. Through this theoretical result, we rigorously verify that the imaging resolution degrades as the bistatic angle approaches $180^\circ$, and specifically, that target identification becomes impossible when the bistatic angle is $180^\circ$. Conversely, relatively high-resolution results are obtained when the bistatic angle is close to $0^\circ$. The theoretical findings are validated through numerical simulations using the Fresnel experimental dataset, which confirm the applicability and limitations of the proposed method for both dielectric and metallic objects.
0
0
math.NA 2026-05-08

Convex hulls give O(d/N) error for positive kernel quadrature

Convex-Geometric Error Bounds for Positive-Weight Kernel Quadrature

Reweighting an i.i.d. pool with simplex weights approximates kernel means at a rate sharper than Monte Carlo in fixed dimension and under favorable spectral decay.

Figure from the paper full image
abstract click to expand
Kernel quadrature can exploit RKHS spectral structure and outperform Monte Carlo on smooth integrands, but optimized quadrature weights are generally signed and may be numerically unstable. We study whether spectral acceleration remains possible when the weights are constrained to be positive, i.e., simplex weights. In the exact-target fixed-pool setting, an evaluated i.i.d. candidate pool of size $N$ is already available and the task is to reweight it so as to approximate the kernel mean embedding. We show that this positive reweighting problem is governed not by the equal-weight empirical average, but by the random convex hull generated by the pool. Our main geometric result shows that the mean of a bounded $d$-dimensional random vector can be approximated by a convex combination of $N$ i.i.d. samples at accuracy $O(d/N)$ with high probability, sharper than equal-weight averaging in the fixed-dimensional regime. We transfer this $d$-dimensional convex-hull approximation to full RKHS worst-case error through an augmented Mercer-truncation argument. The resulting positive-weight KQ bounds consist of a spectral tail term and a finite-sample convex-hull term, yielding Monte-Carlo-beating rates in favorable spectral regimes, including near-$O(1/N)$ rates up to logarithmic factors under exponential spectral decay. We also provide a constructive Frank--Wolfe algorithm that operates directly on the pool atoms, maintains simplex weights, and admits an explicit optimization-error bound.
0
0
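
A runnable sketch of the constructive Frank-Wolfe algorithm mentioned at the end of the abstract, under illustrative assumptions: a Gaussian kernel, a Gaussian pool, and a large reference sample standing in for the exact kernel mean embedding.

import numpy as np

rng = np.random.default_rng(0)
d, N, M = 2, 200, 1000
pool = rng.standard_normal((N, d))            # evaluated candidate pool
ref = rng.standard_normal((M, d))             # proxy sample for the target measure

def gauss_kernel(X, Y, s=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s**2))

K = gauss_kernel(pool, pool)                  # Gram matrix of the pool
c = gauss_kernel(pool, ref).mean(axis=1)      # inner products with the kernel mean
const = gauss_kernel(ref, ref).mean()

w = np.full(N, 1.0 / N)                       # start from equal weights
for t in range(500):                          # Frank-Wolfe on the simplex
    grad = K @ w - c                          # gradient of the quadratic objective
    vertex = int(np.argmin(grad))             # best vertex of the simplex
    step = -w
    step[vertex] += 1.0
    w += (2.0 / (t + 2.0)) * step             # classical FW step size

mmd2_fw = w @ K @ w - 2.0 * (w @ c) + const
mmd2_eq = K.mean() - 2.0 * c.mean() + const
print("squared worst-case error: FW", mmd2_fw, " equal weights", mmd2_eq)
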
math.NA 2026-05-08

IERK methods stay stable for long-time 2D incompressible flows

Long-time stability of implicit-explicit Runge-Kutta methods for two-dimensional incompressible flows

High-order implicit-explicit schemes bound vorticity in L2 and H1 norms with Hölder and Grönwall tools, allowing larger adaptive steps.

Figure from the paper full image
abstract click to expand
High-order adaptive time-stepping algorithms are of significant practical value and theoretical interest for accelerating long-time fluid-flow simulations and resolving complex dynamical behaviors. While several high-order implicit-explicit schemes have been proposed in the literature, their long-time stability properties remain largely unexplored. We develop a family of long-time stable implicit-explicit Runge-Kutta (IERK) methods, up to fourth-order temporal accuracy, for the two-dimensional incompressible Navier-Stokes equations in vorticity-stream function formulation. By combining a convolution-type H\"{o}lder inequality with a damping-type multistage Gr\"{o}nwall inequality, we establish a unified analytical framework that proves long-time stability in both the $L^2$ and $H^1$ norms. A key component of the analysis is a mathematical-induction argument that ensures stage-wise boundedness of the vorticity in the $H^\delta$ norm for some $\delta>0$. To the best of our knowledge, this is the first work to establish large-time stability results for high-order IERK algorithms for the two-dimensional incompressible Navier-Stokes equations. Our IERK schemes employ stiffly accurate diagonally implicit Runge-Kutta approximations for the linear diffusive term together with explicit Runge-Kutta approximations for the nonlinear advection term. By exploiting the specific structure of the Navier-Stokes model, we derive a reduced set of order conditions-requiring only 5 and 11 conditions for the third- and fourth-order methods, respectively, in contrast to the classical 6 and 18-allowing the construction of a parameterized family of efficient schemes. These IERK methods are particularly well suited for adaptive time-stepping, as they permit significantly enlarged step sizes in actual computations.
0
0
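
The structural idea behind such schemes, reduced to its first-order skeleton on a simpler model: periodic 1D Burgers rather than the paper's 2D Navier-Stokes, with the diffusion treated implicitly (diagonal in Fourier space, so the implicit solve is exact) and the nonlinear advection treated explicitly. Parameters are illustrative.

import numpy as np

n, nu, dt, steps = 256, 0.05, 1e-3, 2000
x = 2.0 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers on [0, 2*pi)
u = np.sin(x)

for _ in range(steps):
    ux = np.fft.ifft(1j * k * np.fft.fft(u)).real       # spectral derivative
    rhs = np.fft.fft(u - dt * u * ux)                   # explicit advection
    u = np.fft.ifft(rhs / (1.0 + nu * dt * k**2)).real  # implicit diffusion

print("max |u| at t = 2:", np.abs(u).max())   # decays; the stepping stays stable
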
math.NA 2026-05-08

Weighted L2 projection errors bounded by H1 seminorm except for irregular weights

New error estimates of the weighted L² projections

Sharper estimates hold for general weight distributions and enable better analysis of solvers for PDEs with large coefficient jumps.

Figure from the paper full image
abstract click to expand
It is known that the weighted $L^2$ projection operator exhibits approximation properties different from those of the classical $L^2$ projection, in the sense that the $L^2$ error of the weighted $L^2$ projection of an $H^1$ function generally cannot be bounded by the $H^1$ semi-norm of the function. In this paper, we establish sharper $L^2$ error estimates for the weighted $L^2$ projection of an $H^1$ function under general weight distributions. These new estimates show that the $L^2$ errors of the weighted $L^2$ projection can be controlled by the $H^1$ semi-norm of the function, except when the weight distribution is highly irregular, such as those resembling a ``checkerboard" pattern. These results can be applied to more refined analyses of domain decomposition methods and multigrid methods for certain partial differential equations with large jump coefficients.
0
0
math.NA 2026-05-08

Kernel conditions ensure stability of high-order IEMS schemes

A semi-generating function approach to the stability of implicit-explicit multistep methods for nonlinear parabolic equations

Semi-generating function framework shows large implicit eigenvalues and small explicit norms yield unconditional stability for IEMS methods.

abstract click to expand
The rigorous stability analysis of high-order implicit-explicit multistep (IEMS) methods for nonlinear parabolic equations by using discrete energy arguments is a long-standing open issue due to their non-A-stable property. A novel semi-generating function approach combined with the global discrete energy analysis is proposed for the stability and convergence analysis of general IEMS methods for nonlinear parabolic equations. Inspired by the Grenander-Szeg\"{o} theorem for the Toeplitz matrix, the semi-generating function approach is used to handle the three groups of discrete coefficients via three complex rational polynomials on the unit circle. A unified theoretical framework is then presented to establish the unconditional stability of IEMS methods if the minimum eigenvalue of composite convolution kernels for the implicit part is properly large and the spectral norm bound of composite convolution kernels for the explicit part is properly small. An indicator, called implicit-explicit controllability intensity, is then introduced to evaluate the degree of controllability of the implicit part over the explicit part. Some existing IEMS methods, up to fifth-order time accuracy, are revisited and compared by computing the associated implicit-explicit controllability intensities, so that one can choose a certain IEMS method or a proper parameter to maintain unconditional stability for a specific nonlinear parabolic model. We also propose a new parameterized class of IEMS methods, up to eighth-order time accuracy, which satisfy the a priori assumptions of our theory and have a large value of the implicit-explicit controllability intensity for a proper choice of parameter, so that they would be well suited for a wide class of nonlinear parabolic problems.
0
0
math.NA 2026-05-08

Discretizing matrix-valued kernels from zonal functions on the sphere produces vector-valued quasi-interpolants

Vector field multiplier operators and matrix-valued kernel quasi-interpolation

Approximation of div- and curl-free fields on the sphere avoids integrals and linear solves while handling noise.

Figure from the paper full image
abstract click to expand
We develop and analyze a class of matrix-valued spherical-convolution kernels stemming from scaled zonal functions on $\mathbb{S}^2,$ the unit sphere embedded in $\mathbb{R}^3$. The construction of these kernels utilizes the Legendre differential equation and requires less stringent regularity conditions on the original zonal kernels. The induced integral operators are simple Fourier-Legendre multipliers that not only deliver optimal Sobolev error estimates (in terms of the scaling parameter) but also yield natural Helmholtz-Hodge decompositions on the $L_2$-tangential vector fields on $\mathbb{S}^2$. Via discretization of the underlying convolution integrals, we harvest a family of vector-valued quasi-interpolants that accomplish our approximation goal for divergence- and curl-free vector fields. The quasi-interpolation algorithm is robust against noisy data. The implementation is straightforward, requiring neither evaluation of the convolution integrals nor the solution of systems of linear equations. The computational efficiency and robustness of the quasi-interpolation algorithm stand in sharp contrast to existing kernel-based vector field interpolation methods.
0
0
math.NA 2026-05-08

Fixed-point iteration recovers space-time sources in subdiffusion

Numerical Analysis of Space-Time Dependent Source Identification in Subdiffusion Equations

The scheme converges linearly and bounds the reconstruction error by discretization size and data noise.

Figure from the paper full image
abstract click to expand
In this work, we propose an easy-to-implement fixed-point algorithm for reconstructing a space-time dependent source in a subdiffusion model from lateral boundary measurements. The numerical scheme combines a Galerkin finite element method for spatial discretization with a finite difference method for temporal discretization. We establish the linear convergence of the fixed-point iteration and derive an error bound that depends explicitly on the discretization parameters and the noise level. The error analysis relies on stability properties of the continuous inverse problem and technical estimates for the associated direct problem with limited-regularity data. Numerical experiments are presented to support and complement the theoretical analysis.
0
0
math.NA 2026-05-08

Double splitting iteration solves large indefinite least squares faster

The double splitting iteration method for solving the large indefinite least squares problem

Splitting the normal equations twice cuts iterations and raises robustness over single-splitting schemes for indefinite systems.

Figure from the paper full image
abstract click to expand
Addressing the large-scale indefinite least squares (ILS) problem poses notable computational bottlenecks in the field of numerical linear algebra. State-of-the-art iterative schemes for such problems are predominantly constructed upon a single splitting of the coefficient matrix derived from the corresponding normal equation. In this work, we put forward an innovative iterative framework grounded in the double splitting of normal equations tailored to the ILS problem. Specifically, we elaborate on distinct implementations of the double splitting strategy, which offer constructive insights and methodological references for subsequent research on double splitting-based iterative methods. Two numerical experiments further corroborate that the proposed double splitting iterative paradigm outperforms conventional single splitting approaches in both computational efficiency and convergence robustness.
0
0
math.NA 2026-05-07

HDG method yields symmetric stresses for poroelastic waves

Hybridizable discontinuous Galerkin methods for poroelastic wave propagation with symmetric stress approximation

Reformulation to a symmetric hyperbolic system plus HDG+ and LDG-H discretization produces robust optimal rates after condensation to traces.

Figure from the paper full image
abstract click to expand
In this paper, we develop hybridized discontinuous Galerkin (HDG) methods for poroelastic wave equations. We first rewrite the governing equations to a first-order symmetric hyperbolic system in order to use dual mixed formulations for discretization. Subsequently, we combine two HDG approaches in the discretization of the system, the $\text{HDG}+$ method for the linear elasticity equations and the $\text{LDG-H}$ method for the diffusion equations, with adjustments for the poroelastic wave equations. In our proposed HDG methods, the numerical approximation of the stress tensor is strongly symmetric and the convergence of the errors are robust for nearly incompressible materials. Upon performing static condensation, the system retains numerical trace variables solely for the solid displacement and the fluid pressure. We provide comprehensive error analyses for both the semidiscrete formulation and the Crank--Nicolson time-stepping scheme. Finally, extensive numerical examples illustrate optimal convergence results and simulate different poroelastic wave propagation scenarios relevant in the literature.
0
0
math.NA 2026-05-07

Finite elements deliver explicit two-sided Schrödinger eigenvalue bounds

Explicit Two-Sided Eigenvalue Bounds for Schr\"odinger Operators with Singular Potentials via Finite Element Method

Domain truncation plus an enriched Crouzeix-Raviart scheme produces computable upper and lower estimates whose gap shrinks with mesh size.

Figure from the paper full image
abstract click to expand
We present, to the best of our knowledge, the first numerical algorithm for explicit, computable two-sided eigenvalue bounds for Schr\"odinger operators $H = -\Delta + V$ on $\mathbb{R}^N$, $N = 2,3$, in the presence of both an unbounded potential and an unbounded domain. "Explicit" here means that all constants and ingredients are derived in closed form from the mesh, the potential, and a small set of explicit inequalities (Payne-Weinberger, Hardy, and explicit bounded-domain Sobolev embeddings); the conversion to fully verified (IEEE-754-safe, interval-arithmetic) enclosures is a separate verification step and is left for future work. In particular, singular attractive potentials of Coulomb type, $V(x) = -Z/|x|$, which model the hydrogen atom and the $H_2^+$ molecular ion, are covered by the theory. The method combines domain truncation to a bounded domain $D(R)$ containing $\{|x| \le R\}$ with an extension of Liu's Composite Enriched Crouzeix-Raviart (CECR) finite element method to sign-indefinite potentials. Upper bounds come from the standard conforming Galerkin method; lower bounds come from the CECR construction, whose gap to the exact eigenvalue closes as the mesh is refined. Numerical experiments on the 2D single- and two-centred Coulomb potentials and on the 3D hydrogen atom and $H_2^+$ molecular ion illustrate the algorithm and confirm the predicted convergence.
0
0
math.NA 2026-05-07

Linear BDF2 scheme reaches optimal rates for LLG equation

BDF2-type integrator for Landau-Lifshitz-Gilbert equation in micromagnetics: a-priori error estimates

The method converges at second order in time to both weak and strong solutions while requiring only one linear solve per step.

Figure from the paper full image
abstract click to expand
We consider the Landau-Lifshitz-Gilbert equation (LLG), which models time-dependent micromagnetic phenomena. We analyze a fully discrete scheme that combines first-order finite elements in space with a BDF2 method in time. The method requires the solution of only one linear system of equations per time step and does not enforce the pointwise unit-length constraint of the magnetization. While unconditional weak convergence has been analyzed in an earlier work, we now prove optimal-order convergence rates under sufficient regularity assumptions on the exact solution and the external field. In combination with our previous work, this establishes the first higher-order-in-time and linear integrator that converges both to weak and strong solutions of LLG. Numerical experiments confirm first-order convergence in space and second-order convergence in time.
0
0
math.NA 2026-05-07

Nyström beats Rayleigh-Ritz for leading eigenvalues of PSD matrices

Finding accurate eigenvalues and eigenvectors of positive semi-definite matrices given a subspace

Given a subspace approximation, Nyström delivers strictly higher accuracy at identical cost, with gains that grow unbounded for fast-decaying spectra.

abstract click to expand
We revisit a classical problem in numerical linear algebra: given a $k$-dimensional subspace $\mathcal{Q}$ that approximates the leading eigenspace of an $n\times n$ positive semi-definite matrix $A$, the goal is to extract high-accuracy eigenvalues. The Rayleigh-Ritz (RR) method is the standard algorithm for the task and has been shown to be optimal in several ways for symmetric $A$ that is not necessarily positive semi-definite. In this paper, we show that when $A \succeq 0$, alternative methods can outperform RR, while having the same computational complexity, that is, the main cost is in computing $AQ$, plus an $O(nk^2)$ term. In particular, we advocate the use of Nystr{\"o}m's method, showing that the approximate eigenvalues always have higher accuracy than RR, and the improvement can be arbitrarily large. The difference is significant, especially when $A$ has a fast-decaying spectrum. A similar improvement is numerically observed for the purpose of approximating the leading eigenvectors. In contrast, when the target eigenvalues are the trailing ones, the situation is reversed, and the Nystr{\"o}m method performs poorly; we suggest a remedy for this situation.
0
0
math.NA 2026-05-07

Blending shuts off redistribution at steady state

Update-Magnitude State Redistribution (UM-SRD): A Shut-off Extension of Weighted SRD for Cut-Cell Methods

UM-SRD turns the SRD correction off when finite-volume updates vanish, preserving exact equilibria and stabilizing small cut cells under the base scheme's CFL condition.

Figure from the paper full image
abstract click to expand
Berger & Giuliani (2024) developed a provably stable weighted state redistribution (SRD) algorithm for cut-cell meshes. A key limitation of their method is that, although flux redistribution naturally vanishes when updates are small, SRD continuously applies redistribution even when the flux balance is zero, preventing exact steady-state preservation and potentially introducing unnecessary dissipation in smooth regions. This work introduces Update-Magnitude State Redistribution (UM-SRD), which blends the SRD operator with the identity operator via a smooth, locally-defined scalar indicator of the finite-volume update magnitude. UM-SRD preserves conservation and reduces exactly to the base scheme when the finite-volume update is exactly zero in a small-cell neighborhood. For a one-dimensional model problem with a single small cut cell, we prove UM-SRD is total variation diminishing under the same CFL condition as the base upwind scheme, show the local truncation error modification is higher-order in smooth regions with the unnormalized indicator, and show that the normalized implementation preserves first-order accuracy. Numerical experiments demonstrate convergence toward first order on smooth 1D and 2D advection tests, confirm shut-off behaviour, verify non-oscillatory properties, provide numerical evidence that UM-SRD stabilizes the base scheme near a small cut cell where the base scheme diverges, and confirm exact steady-state preservation. The algorithm reuses existing weighted SRD infrastructure, adding only a local blending mechanism, making it practical for cut-cell finite-volume codes.
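The blending mechanism itself is simple to express. Below is a hypothetical sketch of the idea only, with an assumed smooth indicator $\theta = |\Delta u|/(|\Delta u| + \delta)$ standing in for the paper's locally defined, conservation-respecting choice; `apply_srd` is a placeholder for existing weighted-SRD infrastructure.

import numpy as np

def um_srd_step(u, du_fv, apply_srd, delta):
    # u:         cell averages from the previous step
    # du_fv:     finite-volume update on each cell
    # apply_srd: placeholder for the existing weighted SRD operator
    # delta:     small positive scale for the assumed smooth indicator
    theta = np.abs(du_fv) / (np.abs(du_fv) + delta)  # equals 0 exactly when du_fv == 0
    u_base = u + du_fv                               # base finite-volume update
    # Blend SRD with the identity; note the naive cellwise blend shown here
    # need not be conservative, unlike the paper's construction.
    return theta * apply_srd(u_base) + (1.0 - theta) * u_base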
0
0
math.NA 2026-05-07

Schwarz-like methods converge on degenerate elliptic-parabolic equations

Convergence analysis of Schwarz-like methods for degenerate elliptic-parabolic equations

A monotone-operator framework shows the space-time decompositions reach the solution of p-structured nonlinear diffusion problems.

abstract click to expand
Convergence is proven for Schwarz-like methods applied to degenerate elliptic-parabolic equations with a $p$-structure. This family of PDEs arises, e.g., when modelling nonlinear diffusion processes. The Schwarz-like approximation methods are based on decomposing the space-time domain into overlapping subdomains, which enables parallel implementations. The methods are derived by introducing a pseudo-time component and applying time integrators of splitting type, which are time-stepped towards infinity. This approach of decomposing the space-time domain is related to Schwarz waveform relaxation methods, but the methods considered here have the advantage that they can be proven to converge when applied to nonlinear parabolic, or even degenerate elliptic-parabolic, PDEs. We prove convergence by deriving a nonlinear framework based on the abstract theory for monotone operators and the existence theory for degenerate elliptic-parabolic equations.
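The paper's setting is nonlinear and space-time, but the underlying decomposition mechanics are those of classical overlapping Schwarz. As a point of reference only, here is the textbook alternating Schwarz iteration on a linear 1D Poisson toy with two overlapping subdomains; the model problem, grid, and all names are illustrative, not the paper's scheme.

import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                                 # -u'' = 1, u(0) = u(1) = 0

def solve_dirichlet(i0, i1, left, right):
    # Solve -u'' = f on nodes i0..i1 with given Dirichlet boundary values.
    m = i1 - i0 - 1                            # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[i0 + 1:i1].copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

u = np.zeros(n)
mid_l, mid_r = 40, 60                          # overlap region: nodes 40..60
for _ in range(30):                            # alternating Schwarz sweeps
    u[1:mid_r] = solve_dirichlet(0, mid_r, 0.0, u[mid_r])          # left subdomain
    u[mid_l + 1:n - 1] = solve_dirichlet(mid_l, n - 1, u[mid_l], 0.0)  # right subdomain

exact = 0.5 * x * (1.0 - x)
print("max error:", np.abs(u - exact).max())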
0
0
math.NA 2026-05-07

Residual estimator bounds error for polygonal finite elements

An Adaptive Finite Element Method Based on Generalized Barycentric Coordinates

Proving both upper and lower bounds on discretization error justifies adaptive refinement on arbitrary polygons.

Figure from the paper full image
abstract click to expand
This work derives a posteriori error estimates for polygonal finite element methods based on Wachspress barycentric coordinates. In particular, we prove that the classical residual-based a posteriori error estimator is both an upper and a lower bound for the discretization error. The analysis relies on a Scott-Zhang-type interpolation and homogeneity arguments for rational functions on polygonal elements. Numerical experiments on square and L-shaped domains demonstrate the effectiveness of the adaptive algorithm.
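For orientation, in the standard Poisson setting the classical residual-based estimator cited in the abstract takes the familiar form below; the Wachspress/polygonal version analyzed in the paper will differ in its interpolation operator and constants.

$$\eta_T^2 = h_T^2\,\|f + \Delta u_h\|_{L^2(T)}^2 + \frac{1}{2}\sum_{e\subset\partial T} h_e\,\|[\nabla u_h\cdot \mathbf{n}_e]\|_{L^2(e)}^2,\qquad \eta = \Big(\sum_{T\in\mathcal{T}_h}\eta_T^2\Big)^{1/2},$$

where $[\cdot]$ denotes the jump across edge $e$, and "upper and lower bounds" means reliability and efficiency: $\|\nabla(u-u_h)\|_{L^2(\Omega)} \lesssim \eta \lesssim \|\nabla(u-u_h)\|_{L^2(\Omega)} + \operatorname{osc}(f)$.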
0
0
math.NA 2026-05-07

Smoothing lifts finite element accuracy by one order

Superconvergence in finite element method by smoothing

Embedding the solution in a richer space and running a few Jacobi or Gauss-Seidel steps delivers superconvergence for symmetric positive-definite problems.

Figure from the paper full image
abstract click to expand
This paper develops a smoothing-based postprocessing method for superconvergence in finite element methods. The method applies a few smoothing iterations, such as damped Jacobi, Gauss-Seidel, or conjugate gradient, with the initial guess being the current finite element solution embedded in an enriched finite element space. The resulting procedure is algebraic, easy to implement, and applicable to high-order and three-dimensional discretizations. For symmetric positive-definite problems, we prove superconvergence of the smoothed solutions under additive and multiplicative smoothers. The effectiveness of the proposed method is demonstrated by numerical experiments for the Poisson, Maxwell, biharmonic, and Helmholtz equations.
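Since the procedure is purely algebraic, it is easy to sketch. The following is a minimal damped-Jacobi rendition under the assumption that the enriched-space matrix, load vector, and prolongation (embedding) operator are available; all names are placeholders, and the paper's smoother choices and analysis are richer than this.

import numpy as np

def smooth_postprocess(A2, b2, P, u_h, nsweeps=3, omega=0.7):
    # A2, b2: stiffness matrix and load vector assembled in the enriched space
    # P:      prolongation (embedding) from the original FE space
    # u_h:    converged finite element solution in the original space
    u = P @ u_h                                  # embed into the enriched space
    d_inv = 1.0 / A2.diagonal()                  # Jacobi diagonal scaling
    for _ in range(nsweeps):
        u = u + omega * d_inv * (b2 - A2 @ u)    # one damped Jacobi sweep
    return u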
0
