Topography at the core-mantle boundary (CMB) couples the outer core to the mantle and likely generates observable variations in the length of day ($\Delta$LOD) and the geomagnetic field, though these effects remain poorly understood. We use direct numerical simulations of rotating shell convection with finite-amplitude CMB topography to investigate dynamical effects on the outer core. A range of topographic shapes is used, including individual spherical harmonics and a model representing seismically inferred heterogeneities in the deep mantle. As predicted by prior linear theory in the rotating annulus model, a new instability arises for Rayleigh numbers below the onset of convection; we confirm its existence in a global geometry, though the predicted scalings are quantitatively modified. The shape of the geostrophic contours -- lines of constant axial height -- plays a central role: deformed contours allow buoyancy to do work on the time-averaged flow, driving increases in Reynolds and Nusselt numbers of up to $\sim$100\% relative to a spherical boundary. Previous work showed that topographic torques scale linearly with topographic amplitude and quadratically with flow speeds; we confirm this scaling and extend it with new theory that estimates the torques for global, spectrally broad topography. When extrapolated to core conditions, the predicted torques are consistent with the magnitude required to drive observed decadal and subdecadal $\Delta$LOD variations.
A numerical validation of the stress-jump coupling conditions for Stokes-Darcy flow in two dimensions is presented, addressing a gap that has remained since their introduction by Angot et al. These conditions, formulated for arbitrary flow directions at the interface between a porous medium and an adjacent free-flow region, involve a friction tensor whose coefficients are not known a priori. We calibrate these parameters for a range of porous-medium configurations and flow regimes by matching the macroscopic model to reference solutions derived from processed pore-scale simulations. Several optimization strategies are assessed for this calibration task. The results show that, although three parameters are formally required, exploiting structural properties of the porous medium enables an effective reduction to a one-dimensional calibration with negligible loss in accuracy. A regional sensitivity analysis further indicates that even coarse parameter estimates can yield a well-performing model, highlighting the robustness and practical applicability of the stress-jump formulation.
Iron disproportionation reactions in mantle silicates can produce metallic iron that drives Earth's deep mantle toward metal saturation under reduced conditions. Subducting slabs transport hydrated silicates to these depths, where interactions with metallic iron can convert structurally bound hydrogen in silicates into reduced hydrogen-bearing phases, such as molecular hydrogen or iron hydrides, leaving mantle rocks effectively dry. Using the thermodynamic code HeFESTo with its latest self-consistent treatment of iron-bearing mantle phases, we investigate the stability and distribution of metallic iron in Earth's pyrolitic mantle across a broad range of oxidation states, represented by whole-rock Fe$^{3+}$/$\Sigma$Fe ratios from 1% to 10%. We find that metallic iron is present through much of the lower mantle across this range and, under very reduced compositions of whole-rock Fe$^{3+}$/$\Sigma$Fe = 1-3%, extends into the upper mantle. Where subducted water meets metal-saturated regions, hydrous melts may form and migrate upward, rehydrating the overlying mantle or pooling near the transition zone. Metal saturation can thus redistribute hydrogen internally, creating a sharp contrast between a wet shallow mantle and a dry deep mantle. This redox-driven redistribution can decrease mantle silicate water storage capacity by 64-96% today, to only 0.1-0.8 modern ocean masses, and may explain the viscosity contrast near the upper-lower mantle boundary. Although quantitative estimates of metal abundance and distribution depend on thermodynamic assumptions and remain uncertain above 50 GPa, our results reveal the role of redox reactions between disproportionated iron and subducted water in governing the speciation and redistribution of hydrogen in Earth's mantle.
Self-supervised learning (SSL) has emerged as a promising approach to seismic data denoising as it does not require clean reference data. In this work, the deployment of the Noisy-as-Clean (NaC) method was evaluated for real seismic data denoising under controlled conditions. Two independent seismic acquisitions, each comprising noisy and filtered data, were organized into four real datasets. The NaC SSL method was adapted to add real noise to the noisy input, controlled by a parameter. An experimental protocol with ten experiments was designed to compare different strategies for deploying the NaC SSL method against the supervised learning baseline, using identical network topology and hyperparameters. The models were evaluated in terms of denoising performance, computational cost, and generalization capability. The results show that synthetic additive white Gaussian noise (AWGN) is inadequate for denoising seismic data within the NaC method, and that performance depends strongly on the compatibility between the injected and actual noise characteristics. Furthermore, both the characteristics of the seismic data and the noise level influence model performance. Self-supervised fine-tuning on test data improved SSL performance, whereas no such gain was observed for fine-tuning of supervised models. Finally, NaC proved to be a simple, effective, and model-independent method that offers a feasible solution for the denoising of real seismic data.
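To make the NaC pair construction above concrete, here is a minimal sketch under stated assumptions: noisy patches act as training targets, and real noise records of matching shape are injected into the inputs. The names (`nac_pair`, `alpha`) are hypothetical, and the paper's exact sampling scheme may differ.

```python
import numpy as np

def nac_pair(noisy_patch: np.ndarray, noise_pool: np.ndarray, alpha: float,
             rng: np.random.Generator):
    """Build one Noisy-as-Clean training pair.

    The already-noisy patch serves as the target; the network input is
    the same patch with extra real noise injected, scaled by `alpha`
    (the control parameter mentioned in the abstract).
    """
    extra = noise_pool[rng.integers(0, len(noise_pool))]  # real noise record
    net_input = noisy_patch + alpha * extra               # noisier-than-noisy
    target = noisy_patch                                  # noisy acts as "clean"
    return net_input, target

# The denoiser f is then trained exactly as in supervised learning,
# e.g. loss = ||f(net_input) - target||^2, but without clean references.
```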
Abyssal hills, arguably the most extensive coherent pattern in Earth's surface topography, record the spacing of normal faults formed at mid-ocean ridges. At fast-spreading ridges, high-resolution bathymetry shows a pronounced spectral peak near 41 ky, coincident with obliquity-paced Pleistocene sea-level variability. The origin of this apparent orbital imprint on seafloor structure remains unresolved. We hypothesise that glacial-interglacial sea-level variability influences fault spacing by modulating plate thickness and the flexural stresses produced during plate unbending.
Sea-level change alters mantle melting rates and magma supply at ridge axes, generating variations in the properties of the accreting plate. As the plate moves off axis, it unbends from its ingrown curvature, producing tensile fibre stresses that drive normal faulting. We hypothesise that small perturbations in elastic plate thickness modulate these stresses and thereby influence fault spacing. To test this, we extend the elastic unbending theory of Buck (2001) to include spatially variable plate thickness and yield-weakening viscoplastic flexure, which localises deformation into discrete kinks interpreted as faults. Linearised analysis shows that plate-thickness perturbations generate proportional fibre-stress variations. Numerical solutions demonstrate that perturbations as small as approximately 0.1 percent can phase-lock faulting to the imposed forcing. When driven by plate-thickness perturbations derived from the Pleistocene oxygen-isotope record, the model predicts fault spacings concentrated near 41 ky in the early Pleistocene and near 100 ky in the late Pleistocene, consistent with observed abyssal-hill spectra. These results provide a quantitative mechanism by which glacial-interglacial sea-level variability can be transmitted into tectonic structure.
Travel-time tomography forces a trade-off between mesh resolution and stability in which the regularizer choice dominates what can be recovered. We introduce MIMIR, a differentiable framework that represents the 2D velocity field as a Fourier-feature neural network, replacing the grid-based slowness vector with a continuous, infinitely differentiable function. Prior neural-field tomography has staircased smooth fields under total-variation (TV) priors or oscillated near interfaces under $L^2$ Laplacian smoothing. We adopt second-order total generalized variation (TGV$^2$) and parametrize its auxiliary vector field as a second neural network jointly optimized with the velocity field, eliminating the inner Chambolle-Pock primal-dual loop that classically dominates TGV computation. On three synthetic benchmarks (Gaussian, horizontally layered, curved-fault inspired by OpenFWI) using cross-well acquisition, 5% travel-time noise, and five seeds, MIMIR-TGV$^2$ ties a classical FMM-LSMR baseline with auto-tuned hyperparameters on the Gaussian ($p=0.134$, paired $t$-test) and significantly outperforms it on layered ($p<0.0001$, 44% RMSE reduction) and curved-fault ($p=0.0002$, 33% reduction). Replacing TGV$^2$ with TV degrades performance on Gaussian ($p=0.004$) and layered ($p=0.003$); curriculum-annealed TV improves Gaussian RMSE by only 5.4%, confirming that TV's staircase bias is intrinsic to the regularizer rather than a scheduling artifact. The results empirically validate the Bredies-Kunisch-Pock prediction that piecewise-affine priors are better suited to subsurface velocity recovery than piecewise-constant TV priors. We argue that the central design choice in physics-informed neural-field inversion is not the network architecture but the regularizer. The full pipeline reproduces in under one hour on consumer hardware.
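As a rough illustration of the two design choices named above -- a Fourier-feature velocity field and a TGV$^2$ penalty whose auxiliary vector field is a second network -- the PyTorch sketch below may help; layer widths, the feature scale, and the weights `alpha0`/`alpha1` are illustrative assumptions, not MIMIR's actual settings.

```python
import torch
import torch.nn as nn

class FourierField(nn.Module):
    """Continuous 2D field: x -> MLP([sin(2*pi*Bx), cos(2*pi*Bx)])."""
    def __init__(self, n_feat=128, sigma=10.0, out_dim=1):
        super().__init__()
        self.register_buffer("B", sigma * torch.randn(2, n_feat))
        self.mlp = nn.Sequential(nn.Linear(2 * n_feat, 256), nn.ReLU(),
                                 nn.Linear(256, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):                         # x: (N, 2) coordinates
        z = 2 * torch.pi * (x @ self.B)
        return self.mlp(torch.cat([torch.sin(z), torch.cos(z)], dim=-1))

def tgv2_penalty(v_net, w_net, x, alpha1=1.0, alpha0=2.0):
    """TGV^2 as alpha1*|grad v - w| + alpha0*|sym grad w|, with the
    auxiliary vector field w given by a second network, so no inner
    primal-dual loop is needed."""
    x = x.requires_grad_(True)
    v = v_net(x)                                  # (N, 1)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    w = w_net(x)                                  # (N, 2)
    Jw = torch.stack([torch.autograd.grad(w[:, i].sum(), x,
                                          create_graph=True)[0]
                      for i in range(2)], dim=1)  # (N, 2, 2)
    sym = 0.5 * (Jw + Jw.transpose(1, 2))
    return (alpha1 * (grad_v - w).norm(dim=1)
            + alpha0 * sym.flatten(1).norm(dim=1)).mean()

# Joint optimization sketch: v_net = FourierField(out_dim=1),
# w_net = FourierField(out_dim=2),
# loss = travel_time_misfit(v_net) + lam * tgv2_penalty(v_net, w_net, xs)
```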
Full waveform inversion (FWI) can produce accurate subsurface velocity models. However, the lack of sufficiently low-frequency content in field data often causes cycle skipping and traps the inversion in local minima. The Hilbert-transform envelope (HTE) provides a low-frequency representation that helps mitigate cycle skipping, but it may be insufficient when the initial velocity model is highly inaccurate. To further enhance low-frequency information and reduce dependence on the initial model, we compute an approximate envelope using a sequence of 2D max-pooling operations. Compared with HTE, the resulting max-pooling-based approximate envelope (MPBAE) contains richer low-frequency components and better mitigates cycle skipping. We further combine the MPBAE loss with a shot patching strategy and exploit the inherent normalization property of the Euclidean loss to formulate the MPBAEP loss, in which each shot gather is divided into localized patches for misfit evaluation. This introduces local adjoint-source energy balancing, as the adjoint source associated with the Euclidean loss exhibits a normalization effect within each local region, thereby improving gradient balance and accelerating convergence. Numerical experiments on synthetic and field data demonstrate that MPBAE-FWI significantly outperforms HTE-FWI when the initial model is poor, while MPBAEP-FWI further improves inversion accuracy.
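A minimal sketch of the max-pooling-based approximate envelope idea, assuming a tensor layout of (batch, 1, time, receivers); the kernel size and number of stages are placeholders, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def mpbae(d: torch.Tensor, kernel: int = 5, n_stages: int = 3) -> torch.Tensor:
    """Max-pooling-based approximate envelope of a shot gather.

    Repeated stride-1 max-pooling of |d| spreads peak amplitudes over
    neighbouring samples, producing a smooth attribute that is richer in
    low frequencies than the raw data."""
    env = d.abs()
    for _ in range(n_stages):
        env = F.max_pool2d(env, kernel_size=kernel, stride=1,
                           padding=kernel // 2)   # shape-preserving
    return env
```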
Physics-informed neural networks (PINNs) provide a mesh-free framework for solving PDE-constrained inverse problems, but their extension to Bayesian inversion still faces a fundamental difficulty: prior distributions are typically defined in the weight space of neural networks, whereas physically meaningful prior assumptions are more naturally expressed in function space. In this study, we introduce a unified framework, termed functional-prior-based approaches to Bayesian PDE-constrained inversion using physics-informed neural networks (fpBPINN), to incorporate functional priors into Bayesian PINN-based inversion. We consider two complementary approaches. The first is a functional-prior-informed Bayesian PINN (FPI-BPINN), in which a neural network weight prior is learned to be consistent with a prescribed functional prior, and Bayesian inference is subsequently performed in weight space. The second is function-space particle-based variational inference for PINNs (fParVI-PINN), which performs Bayesian estimation using ParVI directly in function space. We also show that random Fourier features (RFF) play an important role in representing Gaussian functional priors with neural networks and in improving posterior approximation. We applied the proposed approaches to one-dimensional seismic traveltime tomography and two-dimensional Darcy-flow permeability inversion. These numerical experiments showed that both approaches accurately estimated posterior distributions, highlighting the significance of introducing physically interpretable functional priors into Bayesian PINN-based inverse problems. We also identified the contrasting advantages of FPI-BPINN and fParVI-PINN, namely flexibility and accuracy, respectively.
We present a novel procedure for generating synthetic well logs that simultaneously preserves multivariate correlations among petrophysical properties (Density, P-Sonic, S-Sonic) and vertical stacking patterns of electrofacies. The methodology integrates Markov chain models, autoencoder-based dimensionality reduction, and Markov chain Monte Carlo (MCMC) sampling in latent space. Application to a real turbidite reservoir dataset demonstrates that the framework successfully sustains fundamental rock physics relationships and generates geologically realistic vertical heterogeneity consistent with actual well log measurements. This technique addresses critical data scarcity in machine learning applications for seismic interpretation while enabling credible synthetic seismogram generation for scenario testing and uncertainty quantification in petroleum exploration and field development.
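For the Markov-chain component of the workflow, the following is a minimal sketch of vertical electrofacies sampling; the three-facies transition matrix is an invented illustration, not the paper's calibrated model.

```python
import numpy as np

def sample_facies(P: np.ndarray, n_samples: int, start: int,
                  rng: np.random.Generator) -> np.ndarray:
    """Sample a vertical electrofacies sequence from a first-order
    Markov chain with transition matrix P (rows sum to 1)."""
    seq = np.empty(n_samples, dtype=int)
    seq[0] = start
    for k in range(1, n_samples):
        seq[k] = rng.choice(len(P), p=P[seq[k - 1]])
    return seq

# Illustrative three-facies chain (shale, silt, sand) with strong
# self-transitions, mimicking vertical stacking persistence.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.02, 0.08, 0.90]])
column = sample_facies(P, n_samples=500, start=0,
                       rng=np.random.default_rng(0))
```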
Snow slab avalanches are among the most dangerous natural hazards in mountain areas. Recent progress in numerical modelling, field measurements, and large-scale fracture experiments has renewed interest in shear-failure interpretations of avalanche release, particularly in connection with dynamic crack propagation and supershear fracture. Yet most existing stress-based models either assume a perfectly brittle stress drop, neglecting post-peak energy dissipation, or neglect weak-layer pre-peak elasticity, which influences stress redistribution and critical crack length. Here, we derive an analytical solution for shear-failure propagation in a weak layer beneath an elastic snow slab, explicitly accounting for finite post-peak softening and elastic mismatch between slab and weak layer. Building on the one-dimensional weak-spot framework of Gaume et al.\ (2013), we consider a symmetric failure composed of a fully softened zone, a fracture process zone with linear softening, and an intact elastic region. In the limit of vanishing softening displacement $\delta$, the model recovers the classical stress-based critical length $a_{c0}$. For finite softening, the solution distinguishes between the fully softened crack length $a_c$ and the total affected length $b_c$, which includes the fracture process zone. The formulation provides a direct analytical link between weak-spot and fracture-energy approaches, since fracture energy enters through the constitutive softening law itself. For small softening, the exact solution yields the compact approximation $a_c \simeq a_{c0}\sqrt{1+C_a\delta/u_p}$. This distinction is important when comparing with numerical models that may identify the full damaged region rather than the fully softened zone alone.
Reservoir geomodeling is central to subsurface characterization, but it remains challenging because conditioning data are sparse, geological heterogeneity is strong, and conventional geostatistical workflows often struggle to capture nonlinear relationships between facies and petrophysical properties. This study evaluates the robustness and transferability of Pix2Geomodel on a different and more complex reservoir dataset with reduced vertical support. The new case includes a heterogeneous reservoir-quality classification and only 54 retained layers, providing a stricter test of whether Pix2Pix-based image-to-image translation can preserve facies-property relationships under constrained data conditions. Facies, porosity, permeability, and clay volume (VCL) were extracted from a reference reservoir model, exported as aligned two-dimensional slices, augmented using consistent geometric transformations, and assembled into paired image datasets. Six bidirectional tasks were evaluated: facies to porosity, facies to permeability, facies to VCL, porosity to facies, permeability to facies, and VCL to facies. The Pix2Pix model, consisting of a U-Net generator and PatchGAN discriminator, was evaluated using image-based metrics, visual comparison, and variogram-based spatial-continuity validation. Results show that the model preserves the dominant geological architecture and main spatial-continuity trends. Facies to porosity achieved the highest pixel accuracy and frequency-weighted intersection over union (0.9326 and 0.8807, respectively), while VCL to facies achieved the highest mean pixel accuracy and mean intersection over union (0.8506 and 0.7049, respectively). These findings show that Pix2Geomodel can transfer beyond its original case study as a practical framework for rapid bidirectional facies-property translation in complex reservoir modeling.
Characterising the noise of an airborne electromagnetic (AEM) system is critical to correctly imaging Earth's subsurface conductivity. Deterministic and probabilistic geophysical inversion algorithms require foreknowledge of the system noise to specify stopping criteria or a valid model likelihood. Repeat flight lines provide a way for geophysicists to calculate the statistical variability in AEM data acquired over the same ground, and therefore to estimate the levels of noise to propagate into the inversion. The total noise can be separated into multiplicative and additive components. The multiplicative noise is derived from repeat lines at survey altitude. The method to calculate the multiplicative noise is scarcely documented, and usual methods for height-correcting acquired data require a linear trend removal. This study outlines the algorithm used to estimate the multiplicative noise of an AEM system and to correct non-linearly for varying altitudes during repeat flights. Additionally, this paper details a methodology to Gaussianise the data noise and provide a statistically valid Gaussian data misfit or likelihood function. Significantly, we provide methods for estimating the off-diagonal elements in the data covariance matrix used within the misfit function, taking into account the time-channel data correlation that is usually neglected. While our methodology is general, our study of a rotary-wing system leads us to conclude that for regularised time-domain AEM imaging, a diagonal data covariance suffices -- an important implication for rigorous yet practical AEM inversion.
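As a minimal sketch of the covariance-estimation step described above, assuming height-corrected repeat-line soundings stacked into an array; the paper's full treatment (multiplicative/additive separation and Gaussianisation) is not reproduced here.

```python
import numpy as np

def repeat_line_covariance(repeats: np.ndarray) -> np.ndarray:
    """Full time-channel noise covariance from repeat-line soundings.

    `repeats` has shape (n_repeats, n_channels): the same ground flown
    n_repeats times after height correction. Treating each time channel
    as a variable, np.cov retains the off-diagonal (inter-channel)
    correlations that a diagonal-only noise model discards."""
    return np.cov(repeats, rowvar=False)          # (n_channels, n_channels)
```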
Marine seismic exploration is a core technology supporting marine resource exploration, seabed detection, carbon sequestration monitoring, and offshore engineering safety. The integration of full-waveform inversion (FWI), elastic inversion, numerical modeling, and artificial intelligence is driving a paradigm shift from purely physics-driven approaches to hybrid physics-constrained, data-driven modes. Based on the JMSE special issue Modeling and Waveform Inversion of Marine Seismic Data, this paper systematically reviews 11 papers across six areas: data preprocessing, forward modeling, FWI, elastic inversion, reservoir characterization, and migration imaging. Results show that intelligent interpolation, multi-source joint inversion, low-frequency recovery and cycle-skipping suppression, physics-guided deep learning inversion, and wide-band velocity modeling are key solutions to industrial bottlenecks in OBN/OBC, streamer, and passive-source scenarios. These achievements form a complete system from theory to engineering application, supporting deep-water exploration, seabed hazard detection, and carbon sequestration monitoring. This paper also introduces the new JMSE special issue Marine Geophysical Exploration in the Era of Artificial Intelligence, summarizes recent AI-based advances, and discusses future trends in the integration of AI and marine seismic methods.
Relative Geologic Time (RGT) estimation from seismic data is a cornerstone of subsurface structural modeling, depositional evolution analysis, and reservoir characterization, supporting horizon correlation and depositional system reconstruction. Yet accurate RGT estimation remains challenging: RGT is intrinsically a topologically constrained continuous field, in which local errors readily propagate globally and distort the overall result. Conventional methods rely heavily on priors, attribute extraction, and manual interaction, leading to cumbersome workflows. Existing deep-learning approaches mostly use a regression formulation with pixel-wise MSE/MAE losses, which struggle to capture thin horizons and fail to model the stratigraphic semantics of the RGT field, yielding limited generalization and unstable ordering across diverse structural and depositional settings. We propose RGT-Est, a deep-learning framework that transfers the optimization target from the topologically constrained continuous field into a differentiable sinusoidal space, which explicitly encodes the periodic stratigraphic semantics of RGT and alleviates over-smoothing of fine horizons. Pointwise, perceptual, and adversarial losses are jointly imposed in this space to enforce local fidelity, inter-layer consistency, and global structural plausibility, providing both fine-horizon discrimination and global stratigraphic awareness. An optional horizon-guidance module further accepts sparse 2D or 3D horizons as priors. Trained on synthetic data and evaluated on field surveys with densely faulted zones, large unconformities, steeply dipping strata, folded deformations, and clinoforms, RGT-Est achieves state-of-the-art performance among AI-based methods without horizon constraints, and attains substantially higher horizon-correlation accuracy and global topological consistency once sparse priors are incorporated.
The Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On missions provide monthly terrestrial water storage anomaly (TWSA) estimates for monitoring large-scale water storage change. The monthly temporal resolution of official products limits the analysis of high-frequency hydrological events, while existing daily GRACE products often have reduced spatial resolution due to sparse ground-track coverage and the required smoothing and regularization. This study introduces D-SHIFT (Daily Spatial High-Resolution Inference via Feature Transformation), a deep learning-based framework for generating daily, high-resolution TWSA fields from daily spherical harmonic coefficient (SHC) solutions. The model is trained in the monthly domain using low-resolution daily solutions and other auxiliary features as inputs, while targeting monthly mascon products. The model is then applied to daily SHC inputs to generate products with spatial resolution comparable to that of the monthly products. Monthly validation against mascon products gives a global mean root mean square error of about 2.3 cm, with good correlation and explained-variance agreement. Daily analyses show that D-SHIFT produces spatially coherent day-to-day fields and improves basin-scale trend and seasonality estimates compared with low-resolution SHC. The basin-area double-difference analysis indicates that these gains are most relevant for spatially localized signals affected by smoothing and leakage. In Greenland, D-SHIFT better reproduces coastal mass-loss patterns and gives a basin-mean trend of -10.5 cm/yr, close to the CSR monthly value of -12.0 cm/yr.
The demand for high-resolution subsurface imaging and continuous Earth monitoring has driven rapid growth in active and passive seismic data from dense geophone deployments, distributed acoustic sensing (DAS) arrays, and large-scale 2D and 3D surveys. This expansion makes complex noise suppression increasingly challenging, especially when signal fidelity must be preserved. Conventional supervised deep learning methods are often task-specific, require large paired datasets, and can suffer from domain shift under new acquisition conditions. Foundation models offer a promising alternative, but pre-training seismic foundation models from scratch requires massive domain-specific data and substantial computation. We propose an efficient framework that repurposes general-purpose Vision Foundation Models (VFMs) for geophysical tasks through Parameter-Efficient Fine-Tuning. The architecture uses a pre-trained VFM, a DINOv3 encoder, adapted with Low-Rank Adaptation (LoRA) to enable effective feature adaptation with few additional parameters. To improve robustness under unseen field conditions without ground truth, we introduce a kurtosis-guided unsupervised test-time adaptation module that updates only LoRA parameters during inference. This module self-calibrates the model to site-specific noise by identifying information-rich regions via kurtosis and performing self-training without labeled data. Experiments on public exploration seismic images and DAS vertical seismic profiling data from the Utah FORGE site show that the framework matches or outperforms domain-specific models. Tests on unseen cross-site data from a land survey in China and the Gro{\ss} Sch\"onebeck geothermal site in Germany further demonstrate strong generalization and effective signal-noise separation. These results highlight the potential of adapting pre-trained VFMs to data-intensive problems in exploration seismology.
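To illustrate the kurtosis-guided selection of information-rich regions, here is a minimal sketch under assumed patching and threshold choices; the actual test-time loop that self-trains the LoRA parameters is omitted.

```python
import numpy as np
from scipy.stats import kurtosis

def select_signal_patches(section: np.ndarray, patch: int = 64,
                          top_frac: float = 0.25):
    """Rank patches of a seismic section by excess kurtosis: coherent
    events are spiky (high kurtosis) while Gaussian-like noise scores
    near zero. The top fraction can then drive self-training."""
    scores, coords = [], []
    for i in range(0, section.shape[0] - patch + 1, patch):
        for j in range(0, section.shape[1] - patch + 1, patch):
            tile = section[i:i + patch, j:j + patch]
            scores.append(kurtosis(tile, axis=None))
            coords.append((i, j))
    order = np.argsort(scores)[::-1]              # highest kurtosis first
    keep = max(1, int(top_frac * len(order)))
    return [coords[k] for k in order[:keep]]
```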
Effective suppression of surface-related multiples is essential to prevent imaging artifacts and erroneous structural interpretations. Conventional approaches rely on accurate priors or subsurface model knowledge, while supervised learning methods require labeled data that are impractical to obtain for real seismic data. To overcome these limitations, a recently proposed self-supervised learning (SSL) framework integrates multi-dimensional convolution (MDC) for multiple generation with a two-stage training strategy, eliminating the need for both prior knowledge and labeled data. However, this approach requires manual selection of a scaling factor to match the amplitudes between the MDC-generated multiples and the true multiples, thus introducing subjectivity and limiting its practical applicability. In this study, we propose an adaptive SSL method that treats the scaling factor as a learnable parameter, jointly optimized with the network weights in a unified single-stage training pipeline. This dynamic scaling implicitly introduces amplitude diversity into the training data, acting as an implicit regularizer that improves the network's robustness to amplitude variations of surface-related multiples. We further design a composite loss function with homoscedastic uncertainty-based adaptive weighting, which automatically balances the contributions of multiple loss terms without manual tuning. Synthetic and field data examples demonstrate that our method robustly and effectively suppresses surface-related multiples while preserving primary reflections, with migration results confirming improved subsurface imaging quality.
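A minimal PyTorch sketch of the two mechanisms highlighted above -- a learnable scaling factor and homoscedastic uncertainty weighting (in the style of Kendall et al., 2018) -- with invented loss terms standing in for the paper's composite loss.

```python
import torch
import torch.nn as nn

class AdaptiveScaleLoss(nn.Module):
    """Learnable MDC amplitude scale plus automatic loss balancing.

    `alpha` replaces the manually chosen scaling factor; the learnable
    log-variances s1, s2 weight the loss terms without manual tuning."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))  # amplitude scale
        self.s1 = nn.Parameter(torch.zeros(()))       # log sigma_1^2
        self.s2 = nn.Parameter(torch.zeros(()))       # log sigma_2^2

    def forward(self, primaries, mdc_multiples, data):
        # Self-supervised reconstruction: predicted primaries plus scaled
        # MDC multiples should reproduce the recorded data.
        recon = primaries + self.alpha * mdc_multiples
        l_recon = (recon - data).pow(2).mean()
        l_reg = primaries.abs().mean()                # placeholder term
        return (torch.exp(-self.s1) * l_recon + self.s1
                + torch.exp(-self.s2) * l_reg + self.s2)
```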
Implicit full waveform inversion (IFWI) introduces implicit neural representations to parameterize the subsurface velocity model as a continuous function of spatial coordinates, which alleviates the dependence on the initial model and improves inversion flexibility. However, IFWI still requires a large number of iterative updates for each new exploration area, leading to slow convergence, high computational cost, and a lack of mechanisms to share prior knowledge across different geological settings, thereby limiting its efficiency and generalization capability. To further accelerate convergence and enhance cross-area generalization, we propose a meta-learning-enhanced implicit full waveform inversion method, referred to as Meta-IFWI. In this framework, the subsurface velocity model is represented using an implicit neural network with periodic activation functions (SIREN), while a meta-learning strategy is employed to pretrain a single network on multiple velocity inversion tasks. Through this process, the network learns shared inversion priors and rapid adaptation strategies across different geological scenarios. For a new inversion task, the pretrained Meta-IFWI model can be efficiently adapted to the observed seismic data with only a few gradient updates, significantly reducing the number of iterations required for inversion. Numerical experiments conducted on in-distribution models, including layered synthetic models and the Overthrust model, as well as out-of-distribution complex models such as Marmousi 2, demonstrate that, compared with conventional IFWI, the proposed Meta-IFWI achieves improved inversion accuracy while substantially accelerating convergence and reducing computational cost. Moreover, Meta-IFWI exhibits enhanced robustness and stronger cross-area generalization capability.
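A rough sketch of the two ingredients named above: a SIREN-style implicit velocity network and few-step adaptation from a meta-pretrained initialization. The SIREN-specific weight initialization and the meta-training outer loop are omitted, and all names are hypothetical.

```python
import copy
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Periodic activation used by SIREN."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

def siren_velocity(hidden=128, layers=3):
    """Implicit velocity model v(x, z) as a coordinate network."""
    mods, d_in = [], 2
    for _ in range(layers):
        mods += [nn.Linear(d_in, hidden), Sine()]
        d_in = hidden
    mods += [nn.Linear(d_in, 1)]
    return nn.Sequential(*mods)

def adapt(meta_net, data_misfit, steps=5, lr=1e-4):
    """Few-step adaptation to a new exploration area: clone the shared
    meta-learned initialization, then take a handful of gradient updates
    on the new data misfit (which wraps wave simulation + observed data)."""
    net = copy.deepcopy(meta_net)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = data_misfit(net)
        loss.backward()
        opt.step()
    return net
```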
Geoscientists often solve inverse problems to estimate values of parameters of interest given relevant data sets. Bayesian inference solves these problems by combining probability distributions that describe uncertainties in both observations and unknown parameters, and we require that the solution provides unbiased uncertainty estimates in order to inform risk-based decisions. It has been known for over a century that employing different, but equivalent parametrisations of the same information can yield conditional probabilities that are mathematically inconsistent, a property referred to as the BK-inconsistency. Recently, this inconsistency was shown to invalidate the solutions to physical problems found using several well-established methods of Bayesian inference. In this study, we explore the extent to which this inconsistency affects solutions to common geophysical problems. We demonstrate that changes in parametrisations result in inconsistent conditional probability densities, even though they represent exactly the same information. We show that this can affect Bayesian posterior solutions dramatically across various geoscientific problems using real and synthetic data. Given that deterministic inversion is often equivalent to finding the maximum a posteriori solution to specific Bayesian problems (the mathematical equations to be solved are identical), the BK-inconsistency also results in inconsistent solutions to deterministic inverse problems. Indeed, we show that solutions can potentially be designed to favour a desired outcome, simply by changing the parametrisation. This study highlights that a careful rethinking of Bayesian inference and deterministic inversion may be required in physical problems: the effects that we demonstrate are likely to affect past and present inverse problem solutions in a variety of different fields of application.
Campi Flegrei, a large caldera in southern Italy, is among the most hazardous volcanic systems on Earth, directly threatening over one million people. Since 2005, it has entered a phase of accelerating uplift accompanied by intensified seismicity, raising the key question of whether this evolution will culminate in eruption, a bradyseismic peak, or another regime change. Here, we show that the acceleration of seismicity and geodetic deformation is better described by a regularised finite-time singularity than by exponential growth, implying not just a better empirical representation but a different underlying process with potentially dire consequences for the system's subsequent evolution. Independent analyses converge on a critical time $t_c \approx 2030-2034$, with uplift projected to reach about 4 metres by the early 2030s. Geochemical and statistical evidence indicates that deep magmatic volatile input drives this evolution by progressively pressurising the crust. Although no evidence of imminent eruption is found, the system appears to be approaching a critical mechanical threshold whose outcome remains uncertain, requiring sustained high-resolution monitoring and continuously updated forecasts.
We have used new magnetotelluric data collected in the Curnamona Province and the adjacent part of the Delamerian Orogen margin to image electrical conductivity structures and to inform the understanding of the crustal architecture within the regional geological context. The preferred 3D resistivity model confirms, and resolves in greater detail, crustal-scale conductive features that have been mapped by the long-period data collected at half-degree spacing as part of the Australian Lithospheric Architecture Magnetotelluric Project (AusLAMP), that is, the prominent Curnamona Province Conductor and the two Nackara Arc conductors. The new model reveals that the eastern Nackara Arc (ENAC) conductor continues as the Broken Hill Conductor (BHC) into the Curnamona Province. Regional geological considerations suggest that its formation is possibly linked to rifting/extension in the early Cambrian. Although we recognise that the east-west trending Wilcannia Conductor could be a possible continuation of the ENAC-BHC zone, integration with recently acquired deep seismic reflection data and evaluation of the geological setting lead us to suggest that they are not genetically linked. We suggest that the Wilcannia Conductor is younger and most likely is related to late Delamerian (~500 Ma) or Siluro-Devonian magmatism. Finally, these conductivity anomalies may represent large-scale trans-crustal structures that control the emplacement of low volume alkaline ultramafic magmas, and show a spatial relationship with certain mineral deposit types, suggesting a possible control on the distribution and formation of metallogenic provinces/belts in the region. This will be further investigated in future work.
Natural disasters strike at any moment, seemingly out of nowhere; earthquakes in particular affect humans with widely varying magnitudes over time. The main aim of this study is a fractal analysis of seismic activity data for India in the interval from 04-10-2016 to 31-05-2023. This includes analyzing the earthquake magnitudes and their epicenters using fractal statistics at different scales to identify patterns in the data through the fractal spectrum. The probabilities of future earthquakes of different magnitudes were then estimated using the fractal model.
Soil salinity is a major environmental challenge in coastal Bangladesh, threatening agricultural productivity and local livelihoods. This study develops a machine-learning-based framework to predict and map soil salinity in Satkhira district by integrating field observations with Landsat-derived spectral indices. A total of 205 soil samples collected during 2024-2025 were used to train an Extreme Gradient Boosting (XGBoost) model, and predictions were further improved using a Generalized Additive Model (GAM). Spatial cross-validation was applied to reduce autocorrelation bias, and bootstrap resampling was used to quantify prediction uncertainty. The results show strong spatial variability of soil salinity, with higher concentrations in the southern and central coastal regions and lower levels in the northern inland areas. Vegetation indices, particularly NDVI, along with salinity-related spectral indicators, were identified as key predictors. Ten-year-window peak-exposure maps generated for 2014-2023 reveal recurrent high-salinity zones and a persistent, expanding footprint of moderate-to-high salinity exposure across the central parts of the district. Uncertainty analysis indicates higher variability in coastal zones and improved prediction stability when multi-year datasets are combined. The proposed framework provides a robust and scalable approach for long-term monitoring of soil salinity. It supports climate-resilient agriculture, land-use planning, and evidence-based decision-making in coastal Bangladesh.
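A minimal sketch of spatial cross-validation fold assignment of the kind mentioned above, assuming per-sample longitude/latitude arrays; the block count and gridding are illustrative choices rather than the study's exact scheme.

```python
import numpy as np

def spatial_block_folds(lon, lat, n_blocks=5, seed=0):
    """Assign samples to CV folds by spatial blocks rather than at
    random, so train and test points are geographically separated and
    spatial autocorrelation does not inflate skill estimates."""
    rng = np.random.default_rng(seed)
    gx = np.digitize(lon, np.linspace(lon.min(), lon.max(), n_blocks + 1)[1:-1])
    gy = np.digitize(lat, np.linspace(lat.min(), lat.max(), n_blocks + 1)[1:-1])
    cell = gx * n_blocks + gy                      # grid-cell id per sample
    fold_of_cell = rng.integers(0, n_blocks, size=n_blocks * n_blocks)
    return fold_of_cell[cell]                      # fold id per sample
```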
Large earthquakes can trigger translational oscillations of Earth's inner core (Slichter modes), yet their damping remains uncertain. Using simulations, we quantify viscous and Ohmic dissipation in the fluid outer core. Earth's rotation splits the motion into one polar and two equatorial modes. We explore all three and derive scaling laws for the quality factor with each dissipation mechanism. Viscous effects are negligible, confined to a thin layer at the inner core boundary. Ohmic dissipation dominates, with decay times of 3-16 years. Equatorial modes damp at least twice as fast as the polar mode. Our results suggest that Slichter modes can persist for years. Their continued non-detection is therefore more likely due to weak excitation than rapid damping.
The localization of slow and fast slip in fault gouges may play a crucial role in understanding the mechanics of earthquakes and slow slip events. Here, we investigate the fracture energy accompanying this localization and the subsequent thermal weakening. We develop an analytical framework, complemented by numerical simulations, for a gouge governed by rate-and-state-dependent friction with flash-heating at high strain rate and thermal pressurization of pore fluids. The model captures the transition from initially distributed shearing to a co-seismic principal slip ``surface'' at slip $\delta_{\mathrm{loc}} \approx \gamma_c h$, and yields a decomposition of the fracture energy, $G = G_\mathrm{loc}(h) + \Delta G(\delta)$. The minimum, localization-related component $G_\mathrm{loc}$ scales with gouge thickness $h$, which in turn scales linearly with fault size. Flash heating is activated only upon localization for fast earthquake slip, producing an abrupt strength drop, and contributing to the magnitude of $G_\mathrm{loc}$. The post-localization term $\Delta G$ increases with co-seismic slip due to efficient thermal pressurization and is insensitive to $h$. Localization is predicted to occur for both rate-weakening and rate-strengthening gouges because transient state evolution drives apparent weakening after a slip-rate increase. These results unify field, laboratory, and seismological observations of shear band thickness, critical slip, and fracture-energy scaling, and they clarify why small events can be governed by scale-dependent $G_\mathrm{loc}$ whereas large ruptures become increasingly fault-invariant as $\Delta G$ dominates. Our framework provides testable predictions for the relation of gouge thickness to lower bounds of co-seismic fracture energy, and the mechanics of slow-slip transients and fast earthquakes.
It has been 90 years since the discovery of geomagnetic pulsations in the Pc1 range (0.2-5 Hz), widely known as pearls. In the second half of the last century, the concept emerged of pearls as multiple echoes of a wave packet that propagates along geomagnetic field lines, periodically reflecting off the ionosphere at magnetically conjugate points. This paper proposes an alternative interpretation of the pearls. It is assumed that high above the Earth, in the narrow equatorial zone of the outer radiation belt, there is a pulsed generator of ion-cyclotron waves. The generator excites a discrete sequence of wave packets, which are recorded in the magnetosphere and on the Earth's surface as a series of pearls. The generator is a Q-modulated ion cyclotron resonator with active filling. The presence of opacity domains adjacent to the resonator's end faces is reminiscent of the opacity layer in the atmosphere of a Cepheid. This association was strengthened by the fact that in both cases the formation of opaque layers is associated with the presence in the medium of ions with different charge-to-mass ratios. Based on this association, the idea of a ponderomotive valve arose, periodically changing the width of the opacity domains and thereby forming a periodic sequence of pearls. The ponderomotive valve in pearl theory is analogous to the Eddington valve in Cepheid theory.
Ocean wave models are critical for weather and climate forecasting, and accurate in-situ wave observations are essential for validating and improving these models. Open-source, community-driven buoys have democratized wave observations via telemetry in recent years, but these systems transmit only limited amounts of data. Full high-frequency time series, required to study detailed wave physics, can still in most cases only be collected in situ using data loggers. Yet open-source, low-cost logger solutions remain scarce compared to their telemetry-enabled counterparts. Here we present the Openlogartemis Wave Logger (OWL-v2026), an open-source, low-cost, easy-to-build, high-performance logger for wave data measurements. The OWL-v2026 is built from off-the-shelf components from the maker community, requiring only through-hole soldering for assembly, and totals approximately 220 USD per unit. Custom firmware enables high-frequency, low-jitter logging of six-axis inertial measurement unit (IMU) data at 208 or 416 Hz, and GNSS position and Doppler velocity at 10 Hz, with Pulse Per Second (PPS) synchronization for accurate absolute UTC timestamping. We have successfully validated continuous logging over more than 10 days at 208 Hz, a power consumption of approximately 80 mA (approximately 20 days of autonomy with three D-cell lithium batteries), and absolute UTC timestamp accuracy typically better than 10 ms. Though the OWL-v2026 is a purely technical contribution, it has the potential to substantially expand the availability and affordability of high-frequency in-situ wave time series, similar to how the OpenMetBuoy (OMB) (Rabault 2022) expanded the availability of telemetry-enabled wave observations and helped spark new developments in low-cost open-source buoys.
Ditlevsen and Ditlevsen [Nature Communications, 2023] (DD23 hereafter) propose a statistical framework to estimate the timing of a potential collapse of the Atlantic Meridional Overturning Circulation (AMOC) based on extrapolating information from observed sea-surface temperature (SST) variability. By fitting a stochastic one-dimensional fold-bifurcation model to an SST-based fingerprint of the AMOC using Maximum Likelihood Estimation (MLE), they conclude that a collapse is most likely to occur in the middle of the 21st century, with a reported 95% confidence interval covering the time span from 2037 to 2109. Given the profound implications of such a claim for both climate and society, it is essential to thoroughly test the robustness of this result, to critically assess the underlying assumptions and uncertainties, and to estimate the extent to which the reported confidence interval reflects the true limits of current knowledge. Here we examine the sensitivity of DD23's results and argue that four types of uncertainty are insufficiently explored in their analysis: (i) structural uncertainty associated with the assumed low-order bifurcation model, (ii) statistical uncertainty in their model fit, (iii) uncertainty in the representativeness of SST-based fingerprints as proxies for the high-dimensional AMOC dynamics, and (iv) uncertainty in the underlying data, arising from non-stationary observational coverage and dataset preprocessing. Using synthetic experiments and a systematic analysis of alternative fingerprints and observational products, we show that the tipping times estimated by DD23 are highly sensitive to the uncertainties listed above, and extend several millennia into the future when these uncertainties are thoroughly propagated.
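For readers unfamiliar with the model class under discussion, here is a minimal sketch of a noisy saddle-node (fold) normal form with a ramped control parameter; parameter values are illustrative, and this is not DD23's exact parametrisation or fitting procedure.

```python
import numpy as np

def fold_sde(t, lam, A=1.0, m=0.0, sigma=0.3, seed=0):
    """Euler-Maruyama path of dx = (lam(t) - A*(x - m)**2) dt + sigma dW.

    For lam > 0 a stable state sits at x = m + sqrt(lam / A); once the
    ramped control lam(t) crosses zero it vanishes and the path escapes --
    the tipping structure assumed by fold-bifurcation models."""
    rng = np.random.default_rng(seed)
    x = np.empty_like(t, dtype=float)
    x[0] = m + np.sqrt(max(lam[0], 0.0) / A)       # start on stable branch
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        drift = lam[k] - A * (x[k] - m) ** 2
        x[k + 1] = x[k] + drift * h + sigma * np.sqrt(h) * rng.standard_normal()
        x[k + 1] = max(x[k + 1], m - 5.0)          # cap post-tipping escape
    return x

# Example: ramp lam from 1 to -0.2 and inspect where the path detaches
# from the quasi-static branch relative to the lam = 0 crossing.
t = np.linspace(0.0, 150.0, 15001)
lam = np.linspace(1.0, -0.2, t.size)
path = fold_sde(t, lam)
```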
For operational regional synthetic aperture radar (SAR) reconnaissance, mission success depends not only on geometric visibility but also on whether geometric feasibility, prescribed imaging quality, and timely data delivery can be met together within the planning horizon. This paper develops an effective-window framework for regional SAR window generation, per-window signal-level quality screening, and hybrid direct-relay closed-loop scheduling. Through coarse angular bandpass screening, a planar characteristic-curve containment test, and one-dimensional boundary bisection, the framework forms geometry-feasible candidate observation windows with millisecond-level accuracy for their entry and exit times. Each candidate window is then assessed in stripmap mode with a companion point target under a unified echo generation and Back Projection (BP) imaging workflow; only windows whose range and azimuth impulse response width (IRW), peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR) all satisfy the preset thresholds are retained. The retained observation, relay, and downlink windows feed a quality-constrained hybrid direct-relay closed-loop mixed-integer linear programming (MILP) formulation for joint scheduling of observation and ground return. Numerical experiments confirm millisecond-level agreement with STK reference timing for window boundaries. Every candidate window is screened against preset imaging quality thresholds. Hybrid closed-loop scheduling improves closure performance and ground-returned data volume relative to a relay-only baseline.
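A minimal sketch of the one-dimensional boundary bisection step, assuming a boolean feasibility predicate (e.g. the characteristic-curve containment test) that changes value exactly once across the bracketing interval.

```python
def window_boundary(visible, t_lo, t_hi, tol=1e-3):
    """Bisect for a visibility-window boundary.

    `visible(t)` returns True when the target is geometrically feasible,
    and visible(t_lo) != visible(t_hi). Returns the crossing time to
    within `tol` (millisecond-level for tol = 1e-3 seconds)."""
    lo, hi = t_lo, t_hi
    flag = visible(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if visible(mid) == flag:
            lo = mid                  # crossing lies in the upper half
        else:
            hi = mid                  # crossing lies in the lower half
    return 0.5 * (lo + hi)
```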
We present a fully coupled boundary integral formulation for modeling steadily propagating semi-infinite plane strain fractures in poroelastic media. By combining fundamental solutions of plane strain poroelasticity for an instantaneous fluid source and edge dislocations (normal and slip modes) with temporal and spatial superposition principles, we derive boundary integral equations governing the tractions (normal and shear stresses) and pore fluid pressure on the fracture surfaces. Assuming prescribed tractions and pore fluid pressure profiles, we develop a numerical methodology to solve the governing equations for fracture opening, slip, and cumulative fluid exchange rate. The formulation is systematically verified on several relevant problems, including the case of a tensile fracture with exponential normal loading, a stress-free tensile fracture with an imposed exponential pore fluid pressure, and a shear fracture under uniform shear loading over a finite region, demonstrating excellent agreement with analytical solutions. The framework provides a robust tool for analyzing coupled fracture-fluid interactions in permeable poroelastic media and can be adapted to broader classes of elasto-diffusive problems by modifying the underlying physical parameters.
Drag is one of the most important energy dissipation mechanisms in nature, including landslides and debris flows. To satisfactorily reproduce laboratory or field data in simulating landslides, empirical relations or convenient numerical values are often used for the drag force coefficient. However, this is just a parameter calibration rather than a physical reality. Why should the drag coefficient be a constant for a dynamically evolving landslide? Which drag coefficient represents the physical reality? So, what exactly the drag is remains an open question. As the landslide is a deformable body, drag, deformation, and flow must be interconnected. Empirical drag coefficients lack important dynamical aspects. As the drag coefficient is unlikely to be directly measurable, it must be described with some mechanical model. Yet, there exists no analytical model for the drag coefficient. Here, we postulate that the drag coefficient must be a function of the evolving landslide velocity, as it must contain information constituting the landslide acceleration in relation to the net driving acceleration. We develop an innovative, evolutionary drag coefficient that adjusts automatically during the landslide motion. The drag coefficient is described by a dimensionless acceleration number, as it is regulated by the physics and dynamics of the flow. Formal derivation shows that the drag coefficient is a measure of energy inefficiency. This settles the deliberation on the drag force in landslide dynamics, reshaping the concept of drag. Simulation results highlight the essence, mechanical strength, and functionality of the proposed analytical drag as it demonstrates the inherent frictional behaviour of granular debris flows. As the dynamical drag coefficients turn out to lie around the often-calibrated values, the new drag potentially reproduces natural event dynamics well, but now with a clear physical basis.
We investigate two hydraulic stimulation stages performed in April 2022 at the Utah FORGE enhanced geothermal system test site using analytical and numerical models for tensile hydraulic fractures and fluid-induced dilatant shear fractures. The two injection stages differ primarily by the viscosity of the fracturing fluid. Despite similar injection rate schedules and well-head pressure responses, the two stages exhibit markedly different post-shut-in microseismic behavior. The cross-linked gel stage shows sustained microseismic activity for several hours after shut-in, whereas the slickwater stage exhibits an immediate decrease. For the cross-linked gel stage, the located microseismic events reveal the development of a planar radial fracture and allow confident retrieval of the fracture extent evolution with time. We demonstrate that this evolution follows the scalings predicted for viscosity-storage-dominated radial hydraulic fracture by analytical models, providing strong evidence for the development of a planar tensile hydraulic fracture. We further show that leak-off is required to reproduce the fracture extent. In contrast, the immediate arrest observed during the slickwater stage suggests either a transition to a toughness- or leak-off-dominated hydraulic fracture regime, or the development of a fluid-induced shear fracture. We show that the slickwater stage could plausibly correspond to a dilatant shear fracture, provided sufficient dilatancy, whereas this hypothesis is invalidated for the cross-linked gel stage. We confirm these insights using a 3D axisymmetric fully-coupled hydro-mechanical numerical model capable of resolving both tensile and shear failure modes, and including leak-off. Finally, we propagate uncertainties in the in-situ stress state and natural fracture orientations through this numerical model to assess their impact on injection pressures.
Seismic stratigraphic interpretation of shelf-edge clinothems is essential for revealing tectonic evolution, paleoclimate change, depositional dynamic conditions, and hydrocarbon generation and accumulation during basin filling. However, traditional interpretation methods remain labor-intensive, time-consuming, and highly subjective. Although AI-based methods offer a potential solution for automating this task, their development has been limited by the scarcity of comprehensive and representative benchmark datasets for shelf-edge clinothems. This limitation primarily arises from limited field data availability, the scarcity of reliable geological labels, and the structural complexity and strong variability of clinothem-dominated systems. To address this gap, we develop a hybrid benchmark dataset through two complementary strategies of field data curation and geological and geophysical forward modeling, ultimately generating 3,000 unlabeled field samples and 4,000 labeled synthetic seismic samples, respectively. We further evaluate several representative baseline deep learning models on these datasets, and the results demonstrate that the curated dataset provides an effective and representative basis for model training, quantitative assessment, and practical application. Finally, we have publicly released this hybrid benchmark dataset (https://doi.org/10.5281/zenodo.18910271) to facilitate the development, validation, and assessment of deep learning methods for automated seismic stratigraphic interpretation.
Subsurface properties are essential for hazard assessment, energy and environmental management, and infrastructure resilience, but direct observations are sparse and uneven, motivating the use of surface observations as indirect constraints. Here we explore whether AlphaEarth embeddings can be applied to subsurface estimation despite indirect and non-unique physical links between surface and depth. We test this idea in two conterminous U.S. applications: shallow seismic site characterization using $V_{S30}$ with embedding features alone and with conventional covariates (topographic slope and a tectonic-status indicator), and subsurface temperature reconstruction using embedding-based nonlinear regression. Across both applications, embedding-informed models recover spatially coherent, physically plausible patterns and outperform simpler baselines. The comparison also highlights a key difference: domain covariates materially stabilize $V_{S30}$ regression, whereas temperature mapping relies primarily on embedding features. Overall, the results support the feasibility of foundation-model surface representations for regional surface-to-subsurface inference, while emphasizing the need for robust spatial validation under heterogeneous labels and uneven data coverage.
Distributed Acoustic Sensing (DAS) has emerged as a promising tool for environmental and cryoseismological studies, yet its performance under the extreme conditions of the High Arctic remains poorly documented. Here we report on a multi-season DAS experiment conducted across tundra and glacier environments in Hornsund, Svalbard, using 9\,km of fiber-optic cable. The study combines a description of the deployment strategy, instrumentation, and operational constraints with an exploratory analysis of the recorded data to assess the types of cryospheric processes that can be captured with DAS. We document logistical, environmental, and technical challenges and provide guidelines for future experiments, including issues related to coupling, noise sources, cable integrity, and seasonal accessibility. Furthermore, we demonstrate how the dataset can be used for detecting permafrost freezing using noise interferometry, locating icequakes and calving events, and monitoring runoff from river-induced seismic noise. The experiment provides a field-based reference for the design and interpretation of future DAS studies in Arctic environments and highlights considerations relevant for long-term cryoseismological monitoring.
The increasing demand for deep learning in seismic interpretation has highlighted significant challenges, particularly the reliance on massive labeled datasets and the inefficiency of training isolated models for individual tasks. To address these limitations, we introduce a unified, prompt-guided flow-matching framework (SeisDiff-intp) capable of executing multiple seismic interpretation tasks within a single model. By conditioning on varying prompts, the model dynamically switches between interpretation objectives without requiring structural modifications. Furthermore, to overcome the scarcity of labeled data for complex subsurface features, we propose an integrated generative augmentation strategy. By employing the flow-matching setting, the framework can synthesize diverse and geologically realistic training pairs, specifically targeting structurally complex features. Experimental results demonstrate that the proposed approach, coupled with generative augmentation, delivers high-quality, task-specific interpretations with stable and reproducible inference behavior. Ultimately, this approach provides a scalable, flexible, and robust alternative to single-task, deep-learning-based seismic interpretation models.
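A toy sketch of the prompt-conditioned flow-matching objective implied above (straight-line interpolation path with velocity-field regression); the network, dimensions, and prompt encoding are placeholders, not SeisDiff-intp itself.

```python
# Minimal conditional flow-matching loop: regress a velocity field onto the
# straight-line path between noise x0 and data x1, with a prompt embedding
# switching the interpretation task. Everything here is a stand-in.
import torch

net = torch.nn.Sequential(torch.nn.Linear(64 + 1 + 8, 128),
                          torch.nn.ReLU(), torch.nn.Linear(128, 64))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):
    x1 = torch.randn(32, 64)            # labeled target maps (stand-in)
    prompt = torch.randn(32, 8)         # task-prompt embedding (e.g. faults)
    x0 = torch.randn_like(x1)           # noise sample
    t = torch.rand(32, 1)               # random time along the path
    xt = (1 - t) * x0 + t * x1          # linear interpolation path
    v_target = x1 - x0                  # conditional flow-matching target
    v = net(torch.cat([xt, t, prompt], dim=1))
    loss = ((v - v_target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```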
CPU-GPU redesign of direct tomography handles 229-station datasets in a fraction of the original time with nearly identical results.
Surface wave tomography is essential for investigating the shear-wave velocity structure of the crust and upper mantle. The direct surface wave tomography method, DSurfTomo, has become one of the most widely adopted packages due to its ability to account for ray-path bending in complex media, improving subsurface characterization accuracy. However, its inherent serial architecture lacks effective support for multicore CPUs and GPUs. Furthermore, its built-in solver is computationally expensive when solving large-scale linear systems. Consequently, the software struggles to meet current demands for large-scale, high-resolution surface wave tomography. To address these limitations, we propose pDSurfTomo, a highly optimized package utilizing hybrid CPU-GPU acceleration. First, it overcomes the scalability bottleneck in sensitivity-kernel computation through a refined parallel design and uses vectorization to accelerate the modeling of surface wave dispersion. Second, it parallelizes the serial fast marching method using OpenMP, significantly reducing computation time for surface wave traveltimes. Finally, it incorporates GPU acceleration to efficiently solve large-scale sparse linear least-squares problems. To streamline the workflow, we provide a cross-platform GUI with remote server connectivity, allowing users to execute and visualize inversion tasks locally while seamlessly utilizing remote computing clusters. Application to an observed dispersion dataset from 229 stations in North China demonstrates that pDSurfTomo reduces computation time by more than an order of magnitude while maintaining a negligible discrepancy compared to the original DSurfTomo. We expect pDSurfTomo to provide a highly efficient and accessible solution for large-scale, high-resolution surface wave tomography.
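The GPU-accelerated step amounts to a damped sparse least-squares solve. A minimal CPU sketch of that problem class (problem size and damping hypothetical, not pDSurfTomo's code):

```python
# Illustrative damped sparse least-squares solve of the kind at the heart
# of surface-wave tomography inversions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

n_rays, n_cells = 50_000, 20_000          # hypothetical problem size
G = sp.random(n_rays, n_cells, density=1e-3, format="csr")  # sensitivity matrix
d = np.random.randn(n_rays)               # traveltime residuals (stand-in)

# Damped LSQR: minimize ||G m - d||^2 + damp^2 ||m||^2.
m = lsqr(G, d, damp=0.1)[0]
# On GPU, the same call pattern is available via
# cupyx.scipy.sparse.linalg.lsqr with the CSR matrix on the device.
```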
Observations show the multidimensional dynamics of meltwater and distribution of ice layers in the firn on the Greenland Ice Sheet. However, state-of-the-art large-scale models for firn hydrology are essentially one-dimensional, limiting their ability to explain observed datasets and contributing to uncertainty in surface mass balance and sea-level rise estimates. Here, we present a large-scale, multidimensional, multiphase, and thermomechanical model for the subsurface hydrology of firn. The model is highly efficient due to a novel algorithm in which an extra equation for pressure is solved only in saturated regions. Furthermore, the model can apply spatially heterogeneous boundary conditions to the unsaturated-saturated domain and allows for the dynamic formation of fully impermeable ice layers. The numerical results show excellent agreement with analytic solutions to one- and two-dimensional problems that involve coupled unsaturated-saturated flows, thermodynamics, and phase change. We further apply the model to investigate field data from southwest Greenland and find that lateral heterogeneities strongly influence the depth of melt percolation and ice layer formation. Improved understanding of these local, multidimensional processes will provide physics-based constraints on firn densification, reduce uncertainty in converting altimetric elevation change to mass change, and improve estimates of freshwater fluxes to the ocean under a warming climate.
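The efficiency idea of solving a pressure equation only in saturated regions can be illustrated with a one-dimensional toy: flag saturated cells, assemble the elliptic system on that subset alone, and leave unsaturated cells out of the solve. All values below are illustrative, not the paper's discretization.

```python
# Toy: restrict an elliptic pressure solve to the saturated subset of cells.
import numpy as np

S = np.array([0.3, 0.5, 1.0, 1.0, 1.0, 0.6, 0.2])   # saturation per cell
sat = np.flatnonzero(S >= 1.0)                       # saturated subset only
n = len(sat)

# 1-D Poisson system on the saturated cells (Dirichlet at the subset ends).
A = np.zeros((n, n)); b = np.ones(n)                 # unit source term (toy)
for k in range(n):
    A[k, k] = 2.0
    if k > 0:     A[k, k - 1] = -1.0
    if k < n - 1: A[k, k + 1] = -1.0

p = np.zeros_like(S)
p[sat] = np.linalg.solve(A, b)                       # pressure only where saturated
print(p)
```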
Seismic noise with an amplitude higher than that of the sought signal is a challenge for detection. Several techniques have been developed to suppress the ambient noise and to reduce the detection threshold in order to find signals with the lowest possible amplitudes produced by events with magnitudes significant for scientific research and technical applications. Seismic arrays were introduced in the late 1950s as a method for improving underground test monitoring, potentially reducing detection thresholds by fivefold or more by exploiting destructive interference effects of quasi-random noise. The beamforming method is the backbone of data processing at the International Data Centre (IDC), with more than 30 array stations of the International Monitoring System (IMS) installed around the globe. The matched filter method allows for the suppression of noise incoherent with the sought signal. It employs waveform cross-correlation (WCC) with templates based on actual and simulated seismic signals to improve the signal-to-noise ratio estimates for similar signals. The performance of this method is significantly enhanced when it is applied to a seismic array. A novel technique, combined with WCC, is noise stochastization: the addition of scaled random noise to the actual data before calculating the cross-correlation coefficient. The stochastic component can easily be generated by a computer program. Alternatively, a regular signal propagating at an angle of around 90$^\circ$ to the plane of the sought signal can play the role of the stochastic component at array stations. We demonstrate the separate and joint effects of these noise reduction techniques on WCC performance when applied to filtered data from selected IMS arrays and various waveform templates of historical events available at the IDC.
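A minimal sketch of noise stochastization applied to waveform cross-correlation: scaled random noise is added to the trace before correlating against a template. The toy shows only the mechanics; the amplitudes and the 0.5 scaling factor are illustrative, not IDC processing values, and no SNR gain is implied by the toy itself.

```python
# Template cross-correlation with and without an added stochastic component.
import numpy as np

def cc_max(trace, template):
    """Maximum normalized cross-correlation of a template along a trace."""
    t = template - template.mean()
    t /= np.linalg.norm(t) + 1e-12
    n, best = len(t), 0.0
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n] - trace[i:i + n].mean()
        best = max(best, abs(np.dot(w, t)) / (np.linalg.norm(w) + 1e-12))
    return best

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 2.0 * np.arange(0.0, 2.0, 0.01))  # 2 Hz, 2 s
signal = 0.5 * template + rng.normal(0, 1, len(template))
data = np.concatenate([rng.normal(0, 1, 500), signal, rng.normal(0, 1, 500)])

print("CC, raw data:         ", cc_max(data, template))
print("CC, stochastized data:", cc_max(data + 0.5 * rng.normal(0, 1, data.size),
                                       template))
```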
We present a scalable method for geolocalizing buried fiber-optic cables using Distributed Acoustic Sensing (DAS) and traffic-induced quasi-static seismic signals. Assuming access to one end of the fiber, the method fuses DAS measurements with vehicle trajectories obtained from either video tracking or vehicle-mounted GPS. The fiber geometry is estimated by minimizing the mismatch between the measured and physics-based synthetic strain-rate maps. The framework combines a matched-filter initialization with neural-network-based trajectory optimization, enabling robust convergence under realistic noise and trajectory-uncertainty conditions. Simulation and field experiments demonstrate sub-meter localization accuracy, often on the order of tens of centimeters, and strong agreement with manual calibration by tap-testing. This approach provides a practical tool for mapping poorly documented underground fiber infrastructure and for supporting urban sensing applications.
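A conceptual sketch of the geometry inversion: choose channel coordinates that minimize the misfit between an observed strain-rate map and a synthetic one driven by the tracked vehicle. The $1/(1+d^2)$ response kernel and the L-BFGS-B optimizer below are stand-ins for the paper's physics-based forward model and neural-network trajectory optimization.

```python
# Toy fiber-geometry inversion from a vehicle-induced response map.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 30.0, 301)                        # s
veh = np.column_stack([5.0 * t, np.zeros_like(t)])     # vehicle at 5 m/s along x

def strain_map(xy):
    """Toy quasi-static response decaying with vehicle-channel distance."""
    d = np.linalg.norm(veh[:, None, :] - xy[None, :, :], axis=2)
    return 1.0 / (1.0 + d ** 2)                        # (n_times, n_channels)

true_xy = np.array([[20.0, 4.0], [45.0, 6.0], [80.0, 3.0]])
observed = strain_map(true_xy)                         # stands in for DAS data

def misfit(flat):
    return np.sum((strain_map(flat.reshape(-1, 2)) - observed) ** 2)

x0 = true_xy.ravel() + np.random.default_rng(1).normal(0, 3.0, true_xy.size)
sol = minimize(misfit, x0, method="L-BFGS-B")
print(sol.x.reshape(-1, 2))                            # recovered ~ true_xy
```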
Moment versus active-area trajectories of swarms and induced sequences collapse onto the slow-earthquake relation under a diffusive model.
The final size of an earthquake typically cannot be predicted from its ongoing seismic radiation. Expanding observations reveal distinct exceptions, such as slow earthquakes, injection-induced seismicity, and earthquake swarms, where fault slip has an upper bound. A common thread among these anomalies is the diffusive migration of their active areas. Here, we report a unified scaling relation for these diffusional earthquakes. By tracking prolonged earthquake swarms in Northeast Japan, we constrained the time evolution of their active seismicity areas and cumulative seismic moments. Their moment-duration trajectories coincide with the final states documented for global swarms and induced seismicity across various scales. When plotted as seismic moment versus seismicity area, the trajectories of swarms and injection-induced seismicity collapse onto those of slow earthquakes, uniformly explained by a diffusional constant-slip model. The constant-slip scaling of diffusional earthquakes and the constant-stress-drop scaling of ordinary earthquakes mark a bimodal predictability in seismogenesis.
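The contrast between the two scaling regimes can be stated compactly; the prefactors below are our assumptions for illustration, not values from the paper:
\[
A(t) \sim 4\pi D\,t, \qquad
M_0 = \mu\,\bar{u}\,A \;\propto\; t \quad (\text{constant slip}), \qquad
M_0 \simeq C\,\Delta\sigma\,A^{3/2} \quad (\text{constant stress drop}),
\]
so on a moment-area plot, diffusional sequences follow a slope-1 trend while ordinary earthquakes follow slope 3/2; here $D$ is a hydraulic diffusivity, $\mu$ the shear modulus, $\bar{u}$ the (fixed) slip, $\Delta\sigma$ the stress drop, and $C$ a geometric factor.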
Wavelet phase is a critical parameter in seismic processing, where zero-phase wavelets are essential for maximizing temporal resolution and ensuring accurate interpretation of subsurface structures. In practice, however, the seismic wavelet is often nonstationary, exhibiting a phase that varies in space and time due to physical factors such as attenuation, dispersion, and thin-bed tuning effects. Higher-order statistical measures, specifically kurtosis and skewness, are traditionally maximized to drive the signal toward a maximally non-Gaussian or maximally asymmetric zero-phase state. This paper addresses the computational and stability challenges inherent in nonstationary estimation by casting the problem as a regularized non-convex optimization task. We propose a robust framework based on the Alternating Direction Method of Multipliers (ADMM) that eliminates the instability and artifacts associated with traditional piecewise-stationary windowed approaches. The core of our contribution is the derivation of the first closed-form proximity operators for the scale-invariant inverse kurtosis and inverse skewness functionals. By exploiting the signed permutation invariance of these statistical measures, we reduce the high-dimensional proximal subproblems to efficient one-dimensional root-finding tasks. We provide a detailed geometric interpretation of the optimality conditions, demonstrating that the global minimizer is governed by a branch-separation property. Furthermore, we derive an explicit critical threshold parameter which provides a theoretical rule for identifying the global minimum among multiple stationary points. Numerical validations on synthetic and real seismic data demonstrate that the proposed proximal algorithms achieve linear computational complexity and superior stability compared to traditional methods, effectively enabling nonstationary phase correction.
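For orientation, a generic scaled-form ADMM skeleton of the kind the paper builds on; the z-update below uses an $\ell_1$ soft-threshold purely as a placeholder where the paper's closed-form inverse-kurtosis/skewness proximity operators would go.

```python
# Scaled-form ADMM skeleton for min_x f(x) + g(z) subject to x = z.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm(y, rho=1.0, lam=0.1, n_iter=100):
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)   # prox of f(x) = 0.5||x - y||^2
        z = soft_threshold(x + u, lam / rho)    # <- paper's kurtosis prox goes here
        u += x - z                              # dual ascent on x = z
    return z

print(admm(np.random.default_rng(0).normal(size=256))[:5])
```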
This work extends the VirtualQuake earthquake simulation framework to incorporate the effects of fluid injection on fault stability and induced seismicity. By reworking VirtualQuake into a system that uses stress point sources instead of rectangular segments, the new model gains increased geometric flexibility, greater stability, and the re-addition of cross-fault interactions. This approach is paired with fluid-injection modeling, in which inflationary stress sources are distributed according to invasion percolation, simulating both the stress effect of the injection on nearby faults and the deformation from the injection itself.
The model captures both immediate and long-term impacts of injection cycles, including hydraulic fracturing processes and post-injection pressure dissipation. Results show that while single injections produce limited stress changes, repeated injections generate persistent high-pressure regions that progressively destabilize nearby faults, increasing the likelihood of seismic events. This model evolution offers a tool for the evaluation and characterization of long-term risks from commercial injection.
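Invasion percolation itself is simple to sketch: repeatedly invade the frontier cell with the lowest entry threshold and place an inflationary stress source at each invaded cell. This is an illustrative grid toy, not VirtualQuake's implementation.

```python
# Minimal invasion-percolation sketch for distributing injection sources.
import heapq
import numpy as np

rng = np.random.default_rng(42)
n = 64
threshold = rng.random((n, n))           # random entry pressures per cell
invaded = np.zeros((n, n), dtype=bool)

start = (n // 2, n // 2)                 # injection-well location
frontier = [(threshold[start], start)]   # min-heap keyed on entry threshold
n_sources = 300                          # stress sources to place

while n_sources and frontier:
    _, (i, j) = heapq.heappop(frontier)  # easiest cell to invade next
    if invaded[i, j]:
        continue
    invaded[i, j] = True                 # place an inflationary source here
    n_sources -= 1
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < n and not invaded[a, b]:
            heapq.heappush(frontier, (threshold[a, b], (a, b)))

print(invaded.sum(), "cells invaded")
```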
This paper focuses on the problem of anticipating the local occurrence of future large earthquakes. "Local" is defined as the probability of a large earthquake occurring within a defined circle of arbitrary radius surrounding a point of interest. The main (and, for that matter, only) assumption is that the Gutenberg-Richter (GR) magnitude-frequency relation holds. Here we describe a method for computing calendar-time forecasts in a local area for large earthquakes of a target magnitude $M_T$ using a count of small earthquakes $M_S < M_T$ in the area. Using the idea that the GR relation is valid throughout the surrounding region, we define an ensemble of earthquakes in larger surrounding regions to be used in computing the forecast. What follows is simple data mining. The method has significant skill, as defined by the Receiver Operating Characteristic (ROC) test, which improves as time since the last major earthquake increases. The probability is conditioned on the number of small earthquakes n(t) that have occurred since the last large earthquake. The probability is computed directly as the Positive Predictive Value (PPV) associated with the ROC curve. The method is validated by comparison to the UCERF3 forecasts for the UCERF3-defined geographic boxes centered on Los Angeles and San Francisco. The method is then applied to a 125-km-radius circular area around Los Angeles, California, following the January 17, 1994 magnitude M6.7 Northridge earthquake, and short-term forecasts (1-year and 5-year) are computed.
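The count-then-threshold logic can be sketched directly: sweep an alarm threshold on n(t), tabulate the confusion matrix, and read off PPV = TP/(TP+FP) as the forecast probability. The catalog below is a synthetic stand-in, not the California data.

```python
# Toy ROC/PPV computation for a counting-based forecast.
import numpy as np

rng = np.random.default_rng(7)
n_small = rng.poisson(50, size=2000)       # n(t): small-event counts per window
# Toy ground truth: a large event is more likely when n(t) is high.
hit = rng.random(2000) < 0.02 * (n_small / n_small.mean())

for thr in np.unique(n_small)[::10]:       # sweep alarm thresholds
    alarm = n_small >= thr
    tp = np.sum(alarm & hit);  fp = np.sum(alarm & ~hit)
    fn = np.sum(~alarm & hit); tn = np.sum(~alarm & ~hit)
    tpr = tp / max(tp + fn, 1)             # ROC ordinate
    fpr = fp / max(fp + tn, 1)             # ROC abscissa
    ppv = tp / max(tp + fp, 1)             # forecast probability at this threshold
    print(f"thr={thr:3d}  TPR={tpr:.2f}  FPR={fpr:.2f}  PPV={ppv:.3f}")
```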
Parametric study at Delaney Park finds inclination angles up to 15 degrees help little, while larger ones move the fundamental frequency higher, unlike in the recorded transfer functions.
Even when large-scale, site-specific three-dimensional (3D) subsurface models are used to represent spatial variability, multi-dimensional ground response analyses (GRAs) at downhole array sites continue to exhibit amplitude discrepancies between simulated theoretical transfer functions (TTFs) and recorded empirical transfer functions (ETFs), with ETFs at the Delaney Park Downhole Array (DPDA) showing notably lower amplitudes at the fundamental frequency (f0). This discrepancy suggests greater apparent attenuation from wave scattering and destructive interference than is currently captured in multi-dimensional GRAs. However, most prior studies assume vertically propagating shear-wave input, neglecting inclined and azimuthally varying wavefields. This study evaluates the effects of inclination and azimuth in 2D and 3D GRAs at DPDA to assess whether non-vertical wave incidence improves agreement with observed ETFs. Two approaches for modeling inclined waves, the Input Lag Method (ILM) and the Inclined Domain Method (IDM), are compared, with ILM found to be more effective and computationally efficient for large-scale models. A parametric study using ILM shows that inclination angles up to 15$^\circ$ produce only minor reductions in TTF amplitudes near f0, with limited improvement in ETF agreement. Larger inclination angles reduce amplitudes but introduce systematic shifts in f0 to higher frequencies that are not observed in the ETFs. Azimuthal variation in 3D GRAs has a relatively minor effect, primarily influencing trough amplitudes while leaving f0 and higher-mode peaks largely unchanged.
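At its core, the Input Lag Method reduces to delaying the base input motion across the model by the horizontal slowness of the inclined wave; a minimal sketch (velocity and geometry are assumptions, not the DPDA model values):

```python
# Per-node input delays for an inclined plane shear wave at the model base:
# lag(x) = x * sin(theta) / Vs.
import numpy as np

vs = 400.0                          # m/s, basal shear-wave velocity (assumed)
theta = np.deg2rad(15.0)            # incidence angle from vertical
x = np.linspace(0.0, 1000.0, 11)    # m, base-node positions along strike

lags = x * np.sin(theta) / vs       # s, delay applied to each node's input
for xi, li in zip(x, lags):
    print(f"x = {xi:6.0f} m  ->  lag = {li * 1000:6.1f} ms")
```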
Simulations show phase change and geometric tilt reproduce observed uplift variations and predict nonlinear acceleration.
Bradyseism at Campi Flegrei is usually interpreted in terms of hydrothermal pressurization and magmatic degassing. Fluid flow, often treated as a passive response to pressure accumulation, is commonly modeled using simplified geometries and homogeneous permeability fields. We introduce a model in which phase transition, structural heterogeneity and geometric asymmetry jointly influence fluid flow and pressure distribution within a heterogeneous subsurface environment. We hypothesize that coupling among phase change, density gradients and flows may follow a mechanism similar to the self-propulsion observed in asymmetric floating bodies like melting ice blocks, where phase change generates buoyancy-driven currents along their inclined surfaces and net motion in the opposite direction. We simulate pressure evolution in a shallow gas-rich reservoir subject to time-dependent forcing and hydraulic relaxation, coupled to buoyancy-enhanced Darcy flow along prescribed preferential pathways. Our numerical simulations, grounded in reported deformation rates and seismicity depths at Campi Flegrei, reproduce temporal variations in uplift and the persistence of spatially localized flow. Within this framework, asymmetric geometry may promote channelized upward transport, while phase change may enhance buoyancy and contribute to pressure redistribution. Our model predicts nonlinear uplift acceleration, shallow localized seismicity and velocity scaling with pressure and buoyancy. Integration with existing multiphase models would enable the examination of how buoyancy-driven flows influence pressure evolution and deformation during volcanic unrest.
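The buoyancy-enhanced Darcy flow invoked above presumably takes the standard form (the mixture-density closure below is our assumption for illustration):
\[
\mathbf{q} = -\frac{k(\mathbf{x})}{\mu_f}\left(\nabla P - \rho\,\mathbf{g}\right),
\qquad
\rho = \phi_g\,\rho_g + (1-\phi_g)\,\rho_l,
\]
where the heterogeneous permeability $k(\mathbf{x})$ encodes the prescribed preferential pathways, $\mu_f$ is the fluid viscosity, and the vapor fraction $\phi_g$, increased by phase change, lowers the mixture density $\rho$ and strengthens the buoyancy contribution to the flux $\mathbf{q}$.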
Shear-wave leakage in the vertical (Z) component of ocean-bottom cable (OBC) seismic data commonly results from receiver tilt and poor seafloor coupling, introducing unwanted coherent noise that impacts subsequent data processing and imaging. Traditional denoising methods are limited by manual parameter tuning and idealized model assumptions, while deep-learning (DL) approaches have shown significant potential in suppressing shear-wave leakage. However, supervised learning requires clean primary waves (P waves) as labels, which are generally impractical to obtain for field data. To address these challenges, we propose a framework based on horizontal-component priors for adaptive shear-wave leakage suppression (HPAS). Instead of relying on clean P-wave data, HPAS generates input-label pairs directly from raw multi-component field data using an additive-subtractive noise strategy. Specifically, we extract shear-wave (S-wave) noise from the horizontal components and apply a linear transformation to match its first- and second-order moments with those of the S-wave leakage in the Z-component; the statistically matched noise is then added to and subtracted from the original Z-component to create the input and label pairs. By allowing the denoising model to learn the S-wave features present in the differences between the input and the label, the adaptive denoising process approximates supervised learning. Evaluations on both synthetic and field data demonstrate that the proposed HPAS framework effectively and adaptively suppresses S-wave leakage while preserving the amplitude of the P-wave signals in the Z-component, offering a robust solution with strong generalization capabilities.
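The moment-matching step is a one-line linear map; a hedged sketch of the input-label construction (the arrays and the target amplitude are random stand-ins, and the leakage-moment targets would in practice come from an estimate on the Z-component):

```python
# HPAS-style pairing: match moments of horizontal-component S-wave noise
# to the Z-component leakage, then add/subtract to build training pairs.
import numpy as np

def match_moments(s, target_mean, target_std):
    """Linear map giving s the requested first and second moments."""
    return (s - s.mean()) / (s.std() + 1e-12) * target_std + target_mean

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 1024))           # raw Z-component gather (stand-in)
h_noise = rng.normal(size=(128, 1024))     # S-wave noise from horizontals

matched = match_moments(h_noise, target_mean=0.0, target_std=0.3 * z.std())
noisy_input = z + matched                  # network input
label = z - matched                        # training label
```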
The weathering of iron-rich phases within meteorites significantly alters their microstructure and chemical composition, depending on the environmental conditions at the landing site and the exposure time since fall. This work investigates the resulting phases in a correlative and comparative manner using a Nantan meteorite fragment. X-ray Photoelectron Spectroscopy, Energy-Dispersive X-ray Spectroscopy, and X-ray Fluorescence Spectroscopy were used for compositional determination, and X-ray Diffraction and Electron Backscatter Diffraction for phase determination and microstructural analysis.
These techniques revealed the meteorite matrix to be predominantly composed of magnetite, with distinct regions of high Ni content. The grain size was approximately 5 $\mu$m in regions of $\geq$ 2.6 at$\%$ Ni content, with a visible boundary of 100-200 $\mu$m extending into $\leq$ 0.9 at$\%$ Ni regions, wherein the grain size averaged tens of $\mu$m.
Additionally, a brecciated cohenite phase was found with a vein-like structure, composed of NiO, magnetite, and deposits of iron and nickel carbonates. This indicates that the matrix regions formed through the weathering of discrete primary phases, with the high-Ni regions forming from aqueous alteration of kamacite and the low-Ni regions from direct dissolution and oxidation of the source Fe-Ni metal.
The thermal conductivity of Earth's lower mantle controls heat transfer across the core-mantle boundary (CMB) and strongly influences mantle convection. We report direct measurements of the thermal conductivity of single-crystal ferropericlase (Mg$_{1-x}$Fe$_x$O, $x = 0.09$-0.13), the second most abundant lower-mantle mineral, using optical laser flash and X-ray free-electron laser heating in diamond-anvil cells up to $\sim2200$~K and 130~GPa. These experiments provide the first conductivity data for ferropericlase at simultaneous lower-mantle pressures and temperatures. A marked reduction in conductivity between 60 and 100~GPa at $\sim1700$~K is consistent with the iron spin crossover. Combined with our previous results for Fe- and Fe,Al-bearing bridgmanite, the data define a lower-mantle conductivity profile that increases with pressure to $\sim10$~W\,m$^{-1}$\,K$^{-1}$ near the CMB, constraining mantle heat flux, plume buoyancy, and long-term geodynamic evolution.