Cosmic bulk flow--the volume-averaged peculiar velocity of matter--serves as a fundamental test of the Cosmological Principle when probed on gigaparsec (Gpc) scales. Historically, however, measurements of cosmic bulk flow have been limited to $R\lesssim 100\ h^{-1}{\rm Mpc}$. We present an application of kinetic Sunyaev-Zel'dovich (kSZ) velocity reconstruction to constrain the bulk flow on cosmological scales, over a volume of effective radius $R\sim2000\ h^{-1} {\rm Mpc}$. We use the WISE$\times$SuperCOSMOS and unWISE galaxy catalogs, combined with CMB temperature maps from Planck, to reconstruct large-scale velocities in six tomographic bins spanning $0.1\lesssim z \lesssim 1.5$. We place some of the tightest upper limits to date on bulk velocity at $200 \lesssim R\,[h^{-1}{\rm Mpc}]\lesssim 2000$, finding results fully consistent with the $\Lambda$CDM bulk flow expectation. Our unWISE constraints are in strong tension with the CatWISE quasar number-count dipole measurement if that dipole is due to a coherent bulk flow of $\sim 370\ {\rm km\,s^{-1}}$ at $R\sim1000\ h^{-1}{\rm Mpc}$. We also derive constraints on the matter power spectrum at low-$k$ ($k\lesssim10^{-3}\, {\rm Mpc}^{-1}$) with low-$z$ ($z\sim 1$) galaxy samples. Alongside these cosmological constraints, we introduce a novel approach to map the optical depth bias--an inherent astrophysical degeneracy in kSZ velocity reconstruction--across different data combinations. Our work bridges the theoretical gap between bulk flow and kSZ-reconstructed velocities, and expands the horizon of bulk velocity measurements out to Gpc scales.
Cosmic voids provide a low-density environment where the scalar fifth force predicted by $f(R)$ modified gravity (MG) is least screened. We present a semi-analytical calculation of the monopole, dipole, and quadrupole of the void-galaxy cross-correlation function $\xi^{s}(s,\mu)$ in redshift space for the Hu-Sawicki $f(R)$ model ($n=1$), combining the scale-dependent growth factor from the scalaron degree of freedom with nonlinear spherical shell dynamics. The framework applies to any metric $f(R)$ theory for which $G_{\rm eff}(k,a)/G$ can be specified in the quasi-static limit. Our key results are: (1)~the monopole deviation from $\Lambda$CDM grows from $+2.8\%$ for large voids ($r_v = 30\;{\rm Mpc}$) to $+29.7\%$ for small voids ($r_v = 11.7\;{\rm Mpc}$) at $|f_{R0}| = 10^{-5}$ -- a distinctive size-dependent signature of the Compton-scale scalaron response associated with chameleon screening, with $\lambda_C \approx 8\;{\rm Mpc}$; (2)~nonlinear evolution amplifies the modified-gravity signal by $\mathcal{A}_0 \approx 4$, bringing it within reach of ongoing and upcoming wide-field spectroscopic surveys such as DESI, Subaru PFS, Euclid, and the Roman Space Telescope; (3)~the gravitational potential contains a finite-range Yukawa component, producing a radially dependent dipole signature that is complementary to the density and velocity multipoles; (4)~the signal weakens with redshift as the scalaron Compton wavelength shrinks, but remains potentially detectable in Stage-IV spectroscopic void samples. We show that the void-scale transition in the modified-gravity response, the joint sensitivity to density, velocity, and fifth-force contributions, and the nonlinear amplification around void shells make redshift-space void-galaxy multipoles a powerful semi-analytical probe of $f(R)$ gravity and related inhomogeneous dark energy scenarios.
We present a non-parametric, model-independent reconstruction of the cosmological background and perturbation dynamics in non-minimally coupled theories of gravity. Within the Effective Field Theory of dark energy framework, we reconstruct the time-dependent cosmological constant, $\Lambda(t)$, and the non-minimal coupling function, $\Omega(t)$, from cosmological data. To ensure stability, we apply a correlated smoothness prior that restricts the reconstruction to the space of sufficiently smooth functions. Using CMB, DESI BAO, Type Ia supernovae, CMB-ISW lensing cross-correlations, and large-scale DES Year 3 3$\times$2pt data, we find a $2.8\sigma$ hint of a non-minimal coupling. For the dark energy equation of state, our results indicate a preference for a crossing of the phantom divide, $w_{DE}=-1$, at $z<0.8$. The non-minimal coupling effect stabilizes dark energy perturbations, providing a viable physical interpretation of the phantom crossing scenario. Our work paves the way for model-agnostic searches for signatures of modified gravity in cosmological data.
The Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) will deliver an unprecedented Type Ia supernova (SN) sample, making photometric calibration systematics a dominant source of uncertainty in dark energy constraints. We perform a comprehensive analysis of calibration systematic effects in LSST, quantifying how uncertainties in the LSST passbands propagate into biases in SN distance moduli and, consequently, the dark energy equation of state parameters. Specifically, we examine how the inferred values and uncertainties of $w_0$ and $w_a$ shift as a function of the amplitude of passband systematics. For linear passband tilts, we find that the best-fit ($w_0$,$w_a$) shifts by $\sim$0.025$\sigma$ and the $w_0-w_a$ contour area increases by $\sim$5% for each 1%/100nm increase in tilt, while for quadratic passband tilts, our results are less conclusive and warrant further exploration. This analysis will help inform the calibration accuracy required for LSST to achieve its goals in constraining dark energy.
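The linear-tilt propagation described above can be illustrated with a toy numerical sketch: apply a 1%/100nm linear tilt to an idealized top-hat passband and measure the induced synthetic-photometry shift for a sloped spectrum. This is a hypothetical minimal example, not the paper's LSST pipeline; the filter edges, pivot wavelength, and SED slope are all illustrative choices.

```python
import numpy as np

# Toy sketch (not the paper's analysis): a linear passband tilt of
# s %/100nm rescales the transmission by (1 + s/100 * (lam - lam_pivot)/100)
# with lam in nm; for a non-flat SED this shifts the synthetic magnitude.
lam = np.linspace(380.0, 520.0, 20001)               # wavelength grid [nm]
T = ((lam >= 400.0) & (lam <= 500.0)).astype(float)  # idealized top-hat filter
lam_pivot = 450.0                                    # tilt pivot [nm]

def trapz(y, x):
    """Simple trapezoidal integral (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def tilted(T, slope_per_100nm):
    return T * (1.0 + slope_per_100nm * (lam - lam_pivot) / 100.0)

flux = lam / lam_pivot  # toy sloped SED, F_lambda proportional to lambda

def mag_shift(slope):
    """Magnitude change induced by the passband tilt for this SED."""
    f0 = trapz(flux * T, lam)
    f1 = trapz(flux * tilted(T, slope), lam)
    return -2.5 * np.log10(f1 / f0)

dm = mag_shift(0.01)  # a 1%/100nm tilt
print(f"mag shift for 1%/100nm tilt: {dm * 1000:.3f} mmag")
```

For this red-sloped toy SED the shift is a few tenths of a millimagnitude per 1%/100nm of tilt; a flat SED would give zero shift by symmetry, which is why calibration biases are SED-dependent.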
The epoch of reionisation is a key phase in cosmic history, but its detailed evolution remains poorly constrained by current cosmic microwave background (CMB) observations. We investigate whether the kinetic Sunyaev--Zel'dovich (kSZ) effect can discriminate among reionisation histories consistent with current large-scale CMB constraints. Using histories derived from Planck data, we compute the corresponding kSZ angular power spectra within an analytical framework, separating late-time and patchy contributions and accounting for uncertainties in both the ionisation history, $x_e(z)$, and astrophysical parameters constrained by the LORELI II simulations. The allowed histories fall into two broad classes, `short' and `long' duration reionisation, yielding distinct kSZ signatures. Uncertainties from $x_e(z)$ and astrophysical parameters produce comparable amounts of dispersion, yet the two classes remain clearly separable, with variations within each class at the $\sim$10\% level. Current kSZ measurements ($\sim$0--3 $\mu$K$^2$) are not yet precise enough to distinguish between these scenarios, although they tend to favour a `short' reionisation. The kSZ effect thus provides a promising probe of reionisation beyond optical depth constraints. In particular, a measurement of the kSZ power spectrum at $\ell \sim 2000$ with $\sim$0.4 $\mu$K$^2$ sensitivity would discriminate between `short' and `long' reionisation scenarios.
We study how constraints on the abundance of ultralight axions (ULAs) from cosmic microwave background (CMB) data depend on their nonlinear modelling. We focus on the axion mass range $10^{-25} \leq m/\rm{eV} \leq 10^{-23}$, where the axion Jeans scale falls in the quasi-linear regime probed by CMB lensing, making constraints highly sensitive to the choice of nonlinear prescription. We show that the inferred constraints depend significantly on the choice of nonlinear model, which must therefore be treated carefully. Performing Markov Chain Monte Carlo (MCMC) analyses with Planck 2018, ACT DR6 and DESI DR2 BAO data, we find that naive nonlinear modelling of non-cold matter can produce an artificial preference for a subdominant ULA dark matter component with mass $m \approx 10^{-24}\,$eV. This arises from a lensing-like enhancement of the CMB power spectrum.
Tensions often arise between different datasets in cosmology, and consistency tests can serve as a powerful tool for diagnosing potential issues. The density-shear Baryon Acoustic Oscillations (GI BAO) are the imprint of the BAO feature on the shear field induced by the large-scale tidal field. We highlight that GI BAO can provide a robust consistency check for the density BAO, shear measurement, and alignment model. Failure of this check hints at systematics in any of these parts. As an illustration, we present the first GI BAO measurement on photometric data, using the DES Y3 dataset. We find the GI BAO constraint on the BAO scale dilation parameter $\alpha $ to be $ 0.966 \pm 0.252 $ (1$\sigma$), in good agreement with the density BAO constraint, $ 0.966 \pm 0.037 $, thereby validating the density BAO, shear measurement, and the linear alignment model. Furthermore, we argue that combining the density BAO with the GI BAO yields results that are more resilient to systematic effects. Thanks to the massive data volumes of stage IV surveys, the GI BAO will play an even more prominent role as a consistency check.
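The gain from combining the density BAO with the GI BAO can be sketched with a standard inverse-variance combination of the two dilation-parameter measurements quoted above. This assumes the two measurements are independent Gaussians, which is a simplification; the actual analysis would need their covariance.

```python
# Hypothetical sketch: inverse-variance combination of the GI BAO and
# density BAO constraints on the dilation parameter alpha, treating them
# as independent Gaussian measurements (an assumption, not the paper's
# joint-likelihood treatment).
def combine(measurements):
    """measurements: list of (mean, sigma); returns (combined mean, sigma)."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    sigma = sum(weights) ** -0.5
    return mean, sigma

alpha, sigma = combine([(0.966, 0.252),   # GI BAO
                        (0.966, 0.037)])  # density BAO
print(f"combined alpha = {alpha:.3f} +/- {sigma:.4f}")
```

As expected, the much tighter density BAO dominates the combined precision, while the GI BAO's role is primarily the consistency check rather than a precision gain.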
Galaxy rotation curves provide a direct test of how baryonic matter and dark matter combine to determine the mass profiles of disk galaxies. In ultralight or fuzzy dark matter models, numerical simulations predict a central solitonic core surrounded by an outer halo, but the population-level relation between the core and the host halo remains an important modelling choice. We present a hierarchical Bayesian pipeline for fitting soliton-plus-NFW rotation-curve models to the SPARC database while treating the core-halo scaling exponent as a global free parameter. The model uses a Schive-normalized soliton, a regularized NFW envelope with a smooth transition, halo-mass priors tied to $V_{\rm flat}$, and stellar-to-halo-mass information. We apply the pipeline to 106 SPARC galaxies, including 26 systems with bulges, and sample the resulting 346-dimensional posterior with JAX/NumPyro NUTS. The free-scaling run has zero divergences and $\hat r \simeq 1.000$ for the global parameters. The posterior reaches the upper edge of the standard mass prior and the lower edge of the scaling prior, with $\log_{10}(m_\phi/{\rm eV})=-19.20^{+0.12}_{-0.11}$ and $\alpha=0.014^{+0.023}_{-0.011}$. This boundary behaviour persists after removing UGC06787 and after widening the high-mass stellar-to-halo-mass prior. Within the adopted Schive-normalized model and standard SPARC fuzzy-dark-matter prior range, the selected SPARC sample does not identify a population-level soliton component in the interior of the prior range. The main contribution is the hierarchical inference framework and the diagnostic workflow for recognizing boundary solutions in full-sample rotation-curve analyses.
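The soliton-plus-NFW circular velocity that such a fit evaluates can be sketched directly: the enclosed mass of a Schive-type soliton (numerically integrated) plus an analytic NFW enclosed mass, combined into $v_{\rm circ}(r)=\sqrt{G\,M(<r)/r}$. The parameter values below are purely illustrative, not SPARC posteriors, and the pipeline's regularization and smooth transition are omitted.

```python
import math

# Schematic soliton + NFW rotation-curve model. Assumes the Schive et al.
# soliton profile rho(r) = rho0 / (1 + 0.091 (r/rc)^2)^8 and a standard
# NFW envelope; all parameter values are illustrative.
G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def m_soliton(r, rho0, rc, n=4000):
    """Enclosed soliton mass via trapezoidal integration of 4 pi r^2 rho."""
    total, dr = 0.0, r / n
    for i in range(n):
        for x in (i * dr, (i + 1) * dr):
            rho = rho0 / (1.0 + 0.091 * (x / rc) ** 2) ** 8
            total += 0.5 * dr * 4.0 * math.pi * x ** 2 * rho
    return total

def m_nfw(r, rho_s, r_s):
    """Analytic NFW enclosed mass."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s ** 3 * (math.log(1.0 + x) - x / (1.0 + x))

def v_circ(r, rho0, rc, rho_s, r_s):
    return math.sqrt(G * (m_soliton(r, rho0, rc) + m_nfw(r, rho_s, r_s)) / r)

# Illustrative parameters (NOT fitted values)
rho0, rc = 1.0e9, 0.5      # Msun/kpc^3, kpc
rho_s, r_s = 1.0e7, 10.0   # Msun/kpc^3, kpc
for r in (0.5, 2.0, 10.0, 30.0):
    print(f"r = {r:5.1f} kpc  v_circ = {v_circ(r, rho0, rc, rho_s, r_s):7.1f} km/s")
```

The steep $\rho \propto r^{-16}$ falloff of the soliton profile means its enclosed mass converges within a few core radii, which is why the inner rotation curve is soliton-dominated and the outer curve NFW-dominated in this class of models.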
Gravitational-wave (GW) signals from compact binary coalescences (CBCs) enable independent measurements of the Hubble constant \(H_0\) via the spectral siren method, which critically depends on an accurate model of the source-frame mass distribution. While the primary mass function has been extensively studied, the impact of the secondary mass distribution on cosmological inference has been largely overlooked. Here, we perform a joint inference of population and cosmological parameters using 142 confident CBC detections from GWTC-4.0, adopting a new parametric model that flexibly describes features in both the component-mass spectrum and the pairing function, with particular emphasis on the secondary masses. We find \(H_0 = 71.4^{+13.8}_{-13.4} \;\mathrm{km\,s^{-1}\,Mpc^{-1}}\) (68\% CL) from spectral sirens alone, and \(H_0 = 73.5^{+9.2}_{-7.2} \;\mathrm{km\,s^{-1}\,Mpc^{-1}}\) when combined with the bright siren GW170817. Compared to the standard LVK Fullpop-4.0 analysis, these constraints represent improvements of \(\sim29.8\%\) and \(\sim22.2\%\) in \(H_0\) uncertainty, respectively. The enhanced precision is driven by previously unmodeled features, including peaks near \(18\,M_\odot\) and \(65\,M_\odot\) as well as mass-dependent pairing transitions at \(28\,M_\odot\) and \(52\,M_\odot\). Our results demonstrate that the secondary mass function is also a key ingredient for precision standard siren cosmology.
We present new empirically grounded forecasts for the detectability of the stochastic gravitational-wave background anisotropies assuming a population of stellar-mass compact binary coalescences as its source. We quantified the discovery potential using simulations based on the Euclid Flagship Galaxy Catalogue and LIGO-Virgo-KAGRA observational constraints in combination with detailed theoretical modelling. We considered the multi-messenger cross-correlation with galaxies as well as the gravitational wave-only cross-correlation across observation-time bins. For compact binaries up to redshift $z<3$, we found that an angular resolution of $\theta = 4.1$ deg ($\ell \geq 44$) is required for discovery within five years of observation via cross-correlation with a galaxy catalogue that is complete up to limiting magnitude $i < 24.7$ and has redshift uncertainties $\sigma_z = 0.003 (1+z)$. Extending the time range to ten years alleviates that requirement to $\theta = 6.5$ deg ($\ell \geq 28$). We also showed that binning the galaxies in redshift allows us to reconstruct the evolution of the kernel, which can be used to further constrain compact binary population models. Discovery without a multi-messenger tracer has proven significantly more challenging, requiring exclusion of the loudest events, $\theta = 1.8$ deg ($\ell \geq 95$), and a favourable coalescence rate. In light of the plans being carried out in the community for ongoing and upcoming galaxy surveys, this work bodes well for the multi-messenger discovery and exploration of the stochastic gravitational-wave background in the era of next-generation observatories such as the Einstein Telescope and Cosmic Explorer.
Lensed supernovae (SNae) are among the most eagerly anticipated transients expected from the Legacy Survey of Space and Time (LSST). Quadruply lensed SNae permit more highly constrained models than "mere" doubles. The quadruply lensed SN 2025wny offers multiple lessons on how one might respond to an alert. The full benefits of such rare events are best achieved with immediate spectroscopic and photometric followup, within hours rather than days. This in turn requires on-the-fly modeling to predict the position(s) and magnitudes of trailing images and to "pre-cover" any leading images that might have been too faint to trigger an alert and that cannot be detected in the triggering exposure. This paper sets out a proposed protocol for exploiting similar alerts. A list of quadruply lensed candidate hosts must first be supplied in advance to one or more brokers, along with on-the-fly software (an example of which is given) to determine whether an SN near an incipient host is strongly lensed, and whether quadruply or doubly. The brokers would then broadcast the positions and time delays (or "pre-lays") that permit "pre-covery" of leading images, "re-covery" of trailing images, and possibly, extraction of a rough lightcurve from prior LSST exposures. The scheme is illustrated (and some potential problems identified) using preliminary data for SN 2025wny presented by three independent teams. It employs software based on the geometric Witt-Wynne lens model and Falor's exact, forward, differentiable solution thereof.
The cross-correlation between tracers of large-scale structure, such as galaxies or quasars, and the thermal Sunyaev-Zel'dovich (tSZ) signal yields a measure of the bias-weighted mean electron pressure, $\langle b_\mathrm{h} P_\mathrm{e} \rangle$, where $b_\mathrm{h}$ is the halo bias and $P_\mathrm{e}$ is the electron pressure. With a model for the bias, one can derive the thermal history, $\mathrm{d}y/\mathrm{d}z$, where $y$ is the Compton parameter and $z$ is redshift. We explore how these quantities depend on redshift, cosmology, and the physics of galaxy formation using the FLAMINGO suite of cosmological hydrodynamical simulations, which spans a range of cosmological parameters and baryonic feedback implementations in volumes of up to $(2.8\,\text{Gpc})^3$. We find that $\langle b_\mathrm{h} P_\mathrm{e} \rangle$ depends steeply on $S_8 \equiv \sigma_8\sqrt{\Omega_\mathrm{m}/0.3}$, with an effective scaling $\langle b_\mathrm{h} P_\mathrm{e} \rangle \propto S_8^{\epsilon(z)}$, where the exponent $\epsilon(z) \approx 3$ over the redshift range $0.1 \leq z \leq 1$. Compared with existing cross-correlation measurements using tracer samples from SDSS, BOSS, eBOSS, DES, and DESI cross-correlated with tSZ measurements from Planck, we find that models with a low-$S_8$ cosmology and strong feedback are preferred, with a joint fit yielding $S_8 = 0.72^{+0.03}_{-0.03}$ and a normalised group-mass halo baryon fraction $f_b(10^{13}\,M_\odot, z=0.1)/(\Omega_b/\Omega_m) = 0.10^{+0.09}_{-0.05}$. Contrary to most probes of feedback, which sample smaller scales (e.g., X-ray measurements), we show that feedback boosts $\langle b_\mathrm{h} P_\mathrm{e} \rangle$, thus providing a novel test of feedback models. Overall, our results show that the thermal history provides a route to jointly constrain cosmological parameters and test models of galaxy formation.
Today, the observable cosmos exhibits a remarkable degree of isotropy and plausibly began in a nearly isotropic initial state. The properties of the Lorentzian Chern-Simons-Kodama (CSK) functional can provide an understanding of this initial state. In gravity with a positive cosmological constant, the CSK wavefunctional is an exact, chiral solution of the quantum gravitational constraints. We suggest that the normalizability and other issues with this functional, if interpreted as a proper state of quantum gravity, instead suggest an embedding into a larger quantum gravitational completion, and recast the CSK functional as a gravitational sphaleron with observationally desirable properties. By perturbing around the dominant de Sitter saddle of the wavefunctional with appropriate quantum gravitational boundary conditions, we find that for a closed universe the system is dynamically driven to spatial isotropy, while all anisotropic modes acquire positive quadratic curvature and are Gaussian-suppressed. The decay of this sphaleron therefore proceeds along an isotropic channel, providing an intrinsic quantum-gravitational mechanism for dynamical isotropization. This isotropization effect is robust under the inclusion of a slow-roll inflaton, and no analogous isotropic sphaleron exists for spatially flat or hyperbolic geometries. Taken together, these results recast the Lorentzian CSK functional as a chiral sphaleron that naturally prepares an approximately isotropic de Sitter background for inflation. Beyond this phenomenological study, we further suggest that the CSK functional can be understood as a boundary functional for a class of anomaly-free objects, including a complexified generalization of the Hartle-Hawking state.
We present a framework for inferring the dark matter halo masses of quasars and [O III]-emitting galaxies from JWST/NIRCam Wide Field Slitless Spectroscopy (WFSS) clustering measurements at $z \approx 6$. Using the FLAMINGO-10k N-body simulation, we construct mock realizations of quasar and galaxy catalogs that incorporate realistic selection functions, spatial coverage, and sensitivity limits matched to the ASPIRE survey. These mocks enable accurate measurements of the quasar-galaxy cross-correlation and galaxy auto-correlation functions, with covariance matrices derived from 1000 realizations that capture both cosmic variance and bin-to-bin correlations. We employ Bayesian inference to fit the correlation functions and infer the minimum halo masses for quasars and galaxies. Our results demonstrate that Poisson pair-count uncertainties, commonly adopted in high-redshift clustering studies, significantly underestimate the true measurement errors. The dominant missing component is cosmic variance: even the diagonal of the full covariance matrix exceeds the Poisson expectation, with off-diagonal bin-to-bin correlations contributing a smaller additional correction. In particular, (1) the commonly used Poisson error on the correlation functions underestimates the true uncertainty by a factor of approximately 3; (2) the uncertainties on the inferred minimum halo masses are underestimated by a factor of approximately 1.5-3 when adopting Poisson errors instead of the full covariance matrix; (3) the inferred QSO halo mass is robust to whether central and satellite [O III]-emitters share a common mass threshold. Our framework provides a more complete error budget for JWST/WFSS clustering analyses, enabling robust constraints on the host halo masses and duty cycles of high-redshift quasars and emission-line galaxies.
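Why cosmic variance inflates errors beyond the Poisson expectation can be seen in a toy Monte Carlo (with hypothetical numbers, not the paper's mocks): when every realization shares a large-scale density fluctuation, the scatter in counts across realizations exceeds $\sqrt{N}$.

```python
import numpy as np

# Toy demonstration of why Poisson errors underestimate count uncertainties
# in the presence of cosmic variance: each mock shares a common-mode density
# fluctuation delta, so Var(N) = lambda + lambda^2 sigma_delta^2 > lambda.
# All numbers here are illustrative, not the paper's measurements.
rng = np.random.default_rng(42)
n_mocks, mean_counts, sigma_delta = 1000, 100.0, 0.3

delta = rng.normal(0.0, sigma_delta, n_mocks)            # large-scale mode per mock
counts = rng.poisson(mean_counts * np.clip(1.0 + delta, 0.0, None))

poisson_err = np.sqrt(counts.mean())                     # naive sqrt(N) error
true_err = counts.std(ddof=1)                            # scatter across mocks
print(f"Poisson error: {poisson_err:.1f}, mock scatter: {true_err:.1f}, "
      f"ratio: {true_err / poisson_err:.2f}")
```

With these toy values the mock scatter exceeds the Poisson error by roughly a factor of 3, qualitatively matching the underestimate reported above; the exact factor depends on the amplitude of the shared fluctuation relative to the mean counts.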
We constrain uncorrelated primordial isocurvature perturbations using a combination of large- and small-scale cosmological probes, with the small-scale data provided by the ultraviolet luminosity function (UVLF) -- a measure of the number density of galaxies as a function of UV brightness. We consider several isocurvature modes, including cold dark matter, baryon, neutrino density, neutrino velocity, and dark radiation perturbations. The isocurvature power spectrum is modeled using two independent parameterizations: a broken power law and a running power law, without fixing the spectral index a priori. Our analysis combines large-scale data from the Cosmic Microwave Background (CMB), baryon acoustic oscillations, and Type Ia supernovae with small-scale constraints from UVLF measurements obtained by \textit{HST} and \textit{JWST}. The UVLF probes matter fluctuations over a continuous range of intermediate scales, $k \sim 0.5$--$10~\mathrm{Mpc}^{-1}$, over a wide range of redshift, $4\lesssim z \lesssim 13$, providing a direct handle on structure formation in a regime where constraints on the scale dependence of isocurvature perturbations remain comparatively limited. Our result represents the first UVLF-based constraint on model-agnostic isocurvature perturbations carried by various components. We construct $68\%$ and $95\%$ credible envelopes in $k$-space for the allowed isocurvature power and find good agreement between the $95\%$ envelopes of the two parameterizations across a wide range of scales, indicating that our constraints are mostly insensitive to the assumed power-law form.
The Spherical Fourier-Bessel (SFB) basis, in separating the angular and radial modes of the power spectrum, permits a targeted identification and mitigation of systematics in clustering surveys while retaining more cosmological signal than traditional bases. We demonstrate this principle on the eBOSS DR16 LRG and QSO samples, identifying modes which may be contaminated by systematics. Our initial inference on the LRG sample yields an $f_{\rm NL}$ value consistent with zero, while the QSO value is in slight tension with zero. Using the SFB basis, we vary the selection of angular and radial modes to search for inconsistencies in the inferred value of $f_{\rm NL}$, an indicator of underlying systematics. In the QSO sample, we find evidence ($p < 0.005$, compared to the same cuts on EZMocks) of a systematic afflicting large physical scales, which is consistent with residual stellar contamination; we also find evidence ($p < 0.05$) for an unknown systematic in the QSO and LRG samples at the approximate angular plate and imaging scale of eBOSS.
The redshifted 21\,cm line is an emerging tool in observational cosmology that can serve as a direct probe of the intergalactic medium throughout the cosmic timeline. However, the observation of the cosmological 21\,cm signal from early epochs is extremely challenging in practice at all scales and redshifts of interest. The presence of bright astrophysical foregrounds and residual systematic errors along the line of sight poses challenges for its detection. Machine-learning-based Gaussian process regression\,(ML-GPR) has proven to be the most effective strategy for signal separation in LOFAR and NenuFAR observations to measure the 21\,cm signal power spectrum from the Cosmic Dawn\,(CD) and Epoch of Reionization\,(EoR). In this work, we extend this framework to synthetic CD/EoR SKA1-Low observations to assess its robustness in mitigating residual foregrounds in the presence of instrumental and environmental systematic effects. We use our developed end-to-end realistic simulation pipeline (\textsc{21cmE2E}) for SKA-Low observations. Our 4-hour tracking simulation includes extragalactic point sources, the AA* telescope configuration, primary beam response, and error models. The modelled errors incorporate residual antenna-based gain calibration errors, residual ionospheric phase errors, partial de-mixing of the out-of-field sources, and instrumental noise for 1000\,hours of deep integration time. We compare different Bayesian GPR frameworks to assess their ability to suppress residual foreground contamination while minimizing signal loss and providing reliable uncertainty estimates. Our analysis demonstrates that the 21\,cm signal can be robustly recovered within the $2\sigma$ credible interval for almost all $k$-modes over the range $0.06 \leq k \leq 1.0\,h\,{\rm Mpc}^{-1}$.
In this work, we perform a numerical study of three Starobinsky--type inflationary scenarios: the $\alpha$--Starobinsky inflationary model, the power--law Starobinsky inflationary model, and the power--law $\alpha$--Starobinsky inflationary model. For an appropriate choice of parameters, each scenario reproduces the standard Starobinsky limit. For each case, we derive the relevant slow--roll expressions in order to compute numerically the scalar and tensor power spectra over the corresponding parameter space and evaluate the associated inflationary observables. Finally, we provide a comparative analysis in the $(n_s, A_s)$ and $(r, n_s)$ parameter spaces using contour plots. Our results indicate that, for certain choices of parameters, the $\alpha$--Starobinsky model and the power--law $\alpha$--Starobinsky model are favored by \textit{Planck} 2018 observations.
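The standard Starobinsky limit that all three scenarios recover can be sketched with a minimal slow-roll computation for the Einstein-frame potential $V(\phi) = V_0\,(1 - e^{-\sqrt{2/3}\,\phi})^2$ (in reduced Planck units): solve for the field value at $N$ e-folds before the end of inflation and evaluate $n_s = 1 - 6\epsilon + 2\eta$ and $r = 16\epsilon$. This is a generic textbook calculation, not the paper's numerical pipeline, and the tolerances are illustrative.

```python
import math

# Slow-roll sketch for the standard Starobinsky potential, working with
# x = exp(-sqrt(2/3) phi): eps = (4/3) x^2/(1-x)^2 and
# eta = -(4/3) x (1-2x)/(1-x)^2, with N(x) known in closed form.
def observables(N):
    x_end = math.sqrt(3.0) / (2.0 + math.sqrt(3.0))   # where eps(x_end) = 1
    # N = (3/4) [1/x + ln x]_{x*}^{x_end}  =>  solve 1/x* + ln x* = target
    target = (4.0 / 3.0) * N + 1.0 / x_end + math.log(x_end)
    lo, hi = 1e-8, x_end                              # f(x) = 1/x + ln x is
    for _ in range(200):                              # decreasing on (0, 1)
        mid = 0.5 * (lo + hi)
        if 1.0 / mid + math.log(mid) > target:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    eps = (4.0 / 3.0) * x ** 2 / (1.0 - x) ** 2
    eta = -(4.0 / 3.0) * x * (1.0 - 2.0 * x) / (1.0 - x) ** 2
    return 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps    # (n_s, r)

n_s, r = observables(60.0)
print(f"N = 60: n_s = {n_s:.4f}, r = {r:.4f}")
```

The result sits near the familiar large-$N$ approximations $n_s \simeq 1 - 2/N$ and $r \simeq 12/N^2$; the $\alpha$- and power-law generalizations studied above deform these predictions away from this limit.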
Simulation-based inference (SBI) enables parameter inference by training neural networks on forward simulations. It is applied both when likelihoods are intractable and when posterior sampling is subject to time constraints. After motivating situations in which SBI is useful, we give a pedagogical description of the basic techniques: posterior, likelihood, and ratio estimation. Alternatives, sequential versions, and learned summaries are discussed briefly. We provide a brief guide to choosing among the techniques in practical scenarios. SBI needs to be verified through diagnostics since failures can be subtle but would invalidate the inference result. We explain the most common diagnostic techniques. We briefly list some recent SBI applications in the cosmology and astrophysics literature. Before concluding, we discuss current methodological challenges. We identify training with limited simulation budgets as the critical problem for applications to cosmology and astrophysics.
In multifield inflation driven by $d$ scalar fields, $O(d)$ symmetry renders the number of fields irrelevant at the classical level. This ceases to be the case once stochastic effects are accommodated. Statistical quantities such as the mean number and the variance of $e$-folds, as well as the primordial power spectrum and its scale dependence, are perturbatively calculated in a small-noise regime. In particular, a general formula is derived for arbitrary higher-order statistical moments of the stochastic number of $e$-folds at all perturbative orders, keeping the dependence on the number of fields fully analytical. It is also discussed that the requirement for inflation to be successfully terminated puts a theoretical upper bound on the number of fields. Those general results are demonstrated for several $O(d)$-symmetric models.
The persistent challenge of creating stable de Sitter vacua within string theory undermines the observational validity of the $\Lambda$ cold dark matter (CDM) model. This difficulty suggests that the concordance model of cosmology, characterized by a constant dark energy $\Lambda$, may reside in the swampland of inconsistent quantum gravity theories rather than the string landscape of consistent ones. Recent observational data, particularly from the Dark Energy Spectroscopic Instrument (DESI), have significantly challenged $\Lambda$CDM cosmology. Specifically, the combination of DESI baryon acoustic oscillation measurements with cosmological surveys seems to indicate a preference for a dynamic, time-evolving dark energy rather than a constant, with a roughly 10\% reduction in density over the last several billion years. This review summarizes significant advancements made over the past two years in linking DESI findings to string-inspired scenarios.
Conventional cosmological initial condition generators are designed exclusively for fully periodic cubic domains and cannot produce the non-periodic, observer-centric configurations required by stereographically projected N-body codes such as StePS. We present STEPSIC, an open-source initial condition generator that extends Lagrangian perturbation theory-based initial conditions to the spherical and cylindrical geometries used by StePS, while also supporting cuboid domains with arbitrary aspect ratios. The code constructs Gaussian random density fields on anisotropy-free Fourier grids with cubic voxels, applies first- and second-order LPT to obtain displacement and velocity fields, and interpolates these onto particles via B-spline mass-assignment kernels with Fourier-space deconvolution. For stereographic geometries, a multiresolution scheme maps displacement fields across the radially varying particle mass resolution intrinsic to the projection. Both standard and paired-and-fixed variance-reduced realizations are supported. In periodic cubic boxes, the recovered matter power spectrum agrees with the input linear theory prediction to better than 0.5% up to half the Nyquist wavenumber, independent of box aspect ratio (tested up to 10:1). Cross-validation against monofonic using identical white noise fields yields sub-percent power spectrum agreement, with a small residual offset consistent with differences between two independent implementations. Full N-body evolution of matched cylindrical StePS runs confirms that second-order LPT correctly suppresses the 2-3% transient power excess present in first-order initial conditions.
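The Fourier-space deconvolution of B-spline mass-assignment kernels mentioned above follows a standard pattern: an order-$p$ B-spline window has the Fourier transform $W(k) = \mathrm{sinc}(k\,\Delta x/2)^p$ ($p=1$: NGP, $p=2$: CIC, $p=3$: TSC), and the gridded Fourier amplitudes are divided by $W$ to undo the assignment smoothing. The sketch below illustrates this generic correction, not STEPSIC's implementation; grid spacing and orders are arbitrary.

```python
import math

# Generic sketch of B-spline mass-assignment windows and their Fourier-space
# deconvolution. Dividing gridded Fourier amplitudes by W(k) removes the
# smoothing introduced by spreading particle mass over grid cells.
def bspline_window(k, dx, order):
    """Fourier window of an order-`order` B-spline kernel on spacing dx."""
    u = 0.5 * k * dx
    if u == 0.0:
        return 1.0
    return (math.sin(u) / u) ** order

dx = 1.0
k_nyq = math.pi / dx  # Nyquist wavenumber of the grid
for order, name in [(1, "NGP"), (2, "CIC"), (3, "TSC")]:
    w = bspline_window(k_nyq, dx, order)
    print(f"{name}: W(k_Nyq) = {w:.4f} -> deconvolved amplitude x {1.0 / w:.3f}")
```

The suppression grows with kernel order (smoother assignment smooths more), which is why higher-order kernels demand the deconvolution step to keep the recovered power spectrum accurate near the Nyquist scale.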
Recent baryon acoustic oscillation (BAO) distance measurements, when combined with Cosmic Microwave Background (CMB) observations in the $\Lambda$CDM framework, lead to a preference for negative neutrino masses. We investigate whether this neutrino mass anomaly can be alleviated by a class of astrophysically motivated reionization histories. Using a frequentist analysis, we find that some reionization histories can move the best-fit value of $\sum m_\nu$ to a positive value and bring $\sum m_\nu\simeq0.06~{\rm eV}$ into the 95\% confidence interval. To separate the effect of the total optical depth from that of the details of the reionization history, we compare a high-$\tau$ history with a two-step tanh-like reionization history of the same $\tau$. The resulting $\Delta\chi^2(\sum m_\nu)$ profiles are nearly identical. This indicates that the effect is mainly driven by the total optical depth, while the details of the reionization history play only a minor role.
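The tanh-like reionization histories referred to above feed into the CMB through the optical depth $\tau = \sigma_T\, c\, n_{\rm H}(0) \int x_e(z)\,(1+z)^2/H(z)\, dz$. A back-of-the-envelope sketch with illustrative Planck-like parameters (not the paper's fitted values, and a simple single-step tanh in $z$ rather than the paper's two-step history):

```python
import math

# Back-of-the-envelope optical depth for a tanh reionization history.
# A factor 1.08 crudely accounts for singly ionized helium; all parameter
# values are illustrative Planck-like choices, not fits.
sigma_T = 6.652e-29                 # Thomson cross-section [m^2]
c = 2.998e8                         # speed of light [m/s]
H0 = 67.4 * 1000.0 / 3.086e22      # Hubble constant [1/s]
Om, OL = 0.315, 0.685
# n_H(0) = X_p * (Omega_b h^2) * rho_crit/h^2 / m_p
n_H0 = 0.76 * 0.0224 * 1.878e-26 / 1.673e-27   # [1/m^3]

def x_e(z, z_re=7.7, dz=0.5):
    """Single-step tanh ionization history (illustrative)."""
    return 1.08 * 0.5 * (1.0 + math.tanh((z_re - z) / dz))

def tau(z_max=30.0, n=30000):
    """Trapezoidal integral for the Thomson optical depth."""
    total, h = 0.0, z_max / n
    for i in range(n + 1):
        z = i * h
        w = 0.5 if i in (0, n) else 1.0
        E = math.sqrt(Om * (1.0 + z) ** 3 + OL)
        total += w * h * x_e(z) * (1.0 + z) ** 2 / E
    return sigma_T * c * n_H0 / H0 * total

print(f"tau = {tau():.4f}")
```

This yields $\tau \approx 0.054$ for $z_{\rm re} = 7.7$, and the comparison described above amounts to holding this integral fixed while redistributing $x_e(z)$ in redshift.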
This chapter reviews how machine learning (ML) can be used to extract astrophysical and cosmological information from redshifted 21 cm observations of the cosmic dawn and the Epoch of Reionization, with an emphasis on SKA-Low science. We first summarize the basic physics of the global signal and spatial fluctuations, highlighting why the signal is intrinsically non-Gaussian and highly sensitive to poorly constrained properties of early galaxies and radiation backgrounds. We then discuss the main analysis bottlenecks that dominate current and future observations: bright foreground contamination, radio-frequency interference, ionospheric distortions, calibration errors, and the computational burden of repeated forward modeling in high-dimensional parameter spaces. Building on this context, we organize the ML literature by its role in the pipeline: observation-domain methods that operate on contaminated measurements and image products, theory-domain methods that accelerate or compress forward modeling, and inference-domain methods that map complex observables to astrophysical and cosmological constraints. The central message is that ML is most useful in 21 cm cosmology when it preserves physically relevant structure and propagates uncertainty explicitly, rather than acting as an opaque replacement for the underlying forward model.
Modified Newtonian Dynamics (MOND) is a paradigm that can do away with dark matter at galaxy scales, but displays a residual missing mass discrepancy in galaxy clusters. Prompted by the updated JWST-based gravitational lens model of the Bullet Cluster, I confirm here that this cluster exhibits the same residual missing mass discrepancy as other clusters of similar mass in the MOND context. Moreover, this missing mass should be mostly collisionless, since it is centred on the galaxies of the Bullet Cluster.
We study the postinflationary dynamics of an Einstein-Cartan-Holst gravity-motivated inflationary scenario, known as Einstein-Cartan pseudoscalaron inflation, coupled to a type-I seesaw extension of the Standard Model with three heavy right-handed Majorana neutrinos. In particular, we show that nonthermal leptogenesis emerges as a necessary and self-consistent mechanism for generating the observed baryon asymmetry of the Universe, mainly because of the universal coupling of the inflaton to the additional heavy Majorana fermions. The resulting framework provides theoretical predictions that are fully compatible with the latest cosmological constraints from the Cosmic Microwave Background, Baryon Acoustic Oscillations, and Big Bang Nucleosynthesis, as well as with neutrino oscillation experiments, for a wide range of the fundamental Barbero-Immirzi model parameter $\gamma$, which controls the inflationary and postinflationary phases. In particular, for $\gamma \sim -1/100$ and a lightest Majorana-neutrino mass of order $10^{13}$ GeV, we find a scalar spectral index $n_s \sim 0.970$, a tensor-to-scalar ratio $r \sim 0.004$, for a number of e-folds before the end of inflation $N_e \lesssim 60$, and a baryon-to-entropy ratio $n_B/s \sim 8.7 \times 10^{-11}$.
BAO and supernova measurements combined with Planck CMB data show a roughly three-sigma preference against a constant dark energy density.
In the last year, several pieces of evidence have pointed to a possible deviation from the standard cosmological model, $\Lambda$CDM. The recent work by the Dark Energy Survey (DES) collaboration reports a preference in the ballpark of $3\sigma$ in favor of dynamical dark energy over the standard cosmological model. To that end, it used its final analyses of Baryonic Acoustic Oscillations and type Ia Supernovae, both sensitive to the expansion history of the Universe, in combination with the Cosmic Microwave Background (CMB) from Planck. This adds to the growing debate about the nature of dark energy. Published as a Perspective in Nature Astronomy in August 2025.
We present a machine-learning model for generating super-resolution $N$-body simulations with non-vanishing spatial curvature, conditioned on a given low-resolution field, $\Omega_k$, $\Omega_\mathrm{m}$, $\sigma_8$, $h$, and redshift. By upscaling the resolution of $N$-body simulations, such models can drastically reduce the computational cost of producing high-resolution simulations suitable for modelling current and future surveys of large-scale structure. Our model is trained as a generative adversarial network, allowing injected noise to be interpreted as stochastic structure and enabling the generation of an ensemble of plausible high-resolution realisations. We evaluate the model performance by comparing key cosmological summary statistics in the generated simulations to their high-resolution counterparts. We find that the model accurately reproduces large-scale statistics, robustly recovering most of the power that was missing from the low-resolution input, but exhibits a residual suppression of power on small scales of up to $\sim 10\%$ at $k \sim 1\,h\,\mathrm{Mpc}^{-1}$. The abundance of halos around $10^{14}\,M_\odot$ is affected at a similar level, and we find that the profiles of these halos have a lower central density. Although the overall performance is decent, we anticipate that the fidelity of the generative model can be further increased with more and better training data, as well as through improvements in the model architecture and training process. To show a production-scale use case, we apply our model to upscale the resolution of a light cone from a large-volume $N$-body simulation with spatial curvature, producing a first-of-its-kind catalogue that simultaneously captures geometric effects at large scales and accurate nonlinear structure at small scales.
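The validation step described above compares summary statistics such as the isotropically averaged power spectrum between generated and true high-resolution boxes; a minimal numpy sketch of such an estimator follows, where the grid size, box size, and binning are placeholder choices rather than the paper's configuration:

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=8):
    """Isotropically binned P(k) of a 3D overdensity field on a periodic box."""
    n = delta.shape[0]
    fk = np.fft.rfftn(delta)
    pk3d = np.abs(fk) ** 2 * box_size**3 / n**6   # P(k) = |delta_k|^2 / V

    kx = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)

    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1    # DC mode falls outside bins
    pk = np.array([pk3d.ravel()[idx == i].mean() for i in range(n_bins)])
    kc = 0.5 * (edges[1:] + edges[:-1])
    return kc, pk

# Sanity check: white noise of unit variance has a flat spectrum with
# expectation P(k) = V / N^3.
rng = np.random.default_rng(0)
field = rng.standard_normal((32, 32, 32))
k, pk = power_spectrum(field, 100.0)
```

Applied to a low-resolution input and a generated high-resolution realisation, the ratio of the two `pk` arrays directly exposes the small-scale power suppression quoted in the abstract.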
The reionisation time field $t_{\rm reion}(r)$ captures the entire history of cosmic reionisation by mapping the moment at which each region of the Universe became ionised. Previous work has shown that $t_{\rm reion}(r)$ can be inferred from 21-cm observations using convolutional neural networks (CNNs). However, these CNN predictors are trained on specific reionisation models, raising critical concerns about their reliability when applied to observational data that may differ from their training assumptions. This paper proposes and tests a method to evaluate the coherence of our CNN predictors with respect to their input model, thereby enabling the validation or exclusion of underlying reionisation models based on their reconstruction behaviour. Setting the CDM model as the reference input, we evaluate the coherence of $t_{\rm reion}(r)$ reconstructions by comparing them across different redshifts for several prediction models, since the statistics of the $t_{\rm reion}(r)$ reconstructions should be the same for every redshift of the input maps. Our study particularly investigates CNNs trained on cold and warm dark matter (WDM) models, with WDM particle masses of 2, 3, 5, and 7 keV. We find that the predictors trained on the 5 and 7 keV WDM models exhibit a high level of self-consistency similar to the CDM predictor, while the 2 keV predictor, and to a lesser extent the 3 keV predictor, display significant deviations across several metrics. These findings indicate that CNN predictors retain sensitivity to differences in the underlying reionisation model and can be used to assess model compatibility with observations. Our results highlight the necessity of validating machine-learning predictors against their input models before applying them to real data. The method proposed here offers a pathway to more trustworthy applications of CNNs in the study of reionisation.
Combined observations from multiple experiments set new upper bounds on vector modes with no significant detection but incomplete exclusion.
We present new constraints on gravitational vector perturbations ($\mathcal{V}$-modes) using Cosmic Microwave Background (CMB) data, including temperature and $E$-mode polarization from SPT-3G D1, ACT-DR6, and $Planck$, as well as $B$-mode data from BICEP/Keck and SPTpol, which provide the strongest constraints on $\mathcal{V}$-modes. We consider three initial conditions (ICs) that source $\mathcal{V}$-modes: neutrino isocurvature (ISO), neutrino octupole (OCT), and a sourced mode (SMD) generated by an anisotropic stress before matter-radiation equality. We also consider including tensor modes along with $\mathcal{V}$-modes for each of these ICs. Combining all datasets, we obtain 95\% confidence level upper limits of $r_\mathrm{v} < 1.3\times10^{-4}$ (ISO), $r_\mathrm{v} < 6.8$ (OCT), and $r_\mathrm{v} < 4.2$ (SMD), with slightly tighter bounds when tensors are included, at a pivot scale $k_p = 0.05$ Mpc$^{-1}$. Interestingly, for SMD without tensors, using SPTpol $B$-modes alone yields $r_\mathrm{v} = 4.7 \pm 2.1$, a $2.2\sigma$ deviation from zero. A similar result is found for SMD when including tensor perturbations. No statistically significant deviation from $\Lambda$CDM is found. However, $\mathcal{V}$-modes are not fully excluded by current $B$-mode data and should be considered when interpreting primordial signals.
HI intensity mapping is a promising technique to probe large-scale structure, traditionally analyzed via two-point statistics such as the angular power spectrum. This technique has proven very powerful but may miss key non-Gaussian information present in the signal. We extend the starlet l1-norm, a multi-scale higher-order statistic previously applied to weak lensing maps, to the brightness temperature fluctuations of the HI density field. The HI signal is highly non-Gaussian at late times (z < 1) due to nonlinear structure growth, motivating the use of advanced summary statistics. We simulated full-sky HI lognormal brightness temperature maps using CAMB and GLASS, generating 10,000 realizations with associated cosmological parameters. We extracted both the starlet l1-norm and angular power spectrum from these maps. Using the JaxILI framework, we performed neural density estimation for implicit likelihood inference. The analysis considered simulated maps incorporating realistic noise and telescope beam, capturing the impact of observational effects on parameter inference. In this work, we focus on the redshift range 0.4 < z < 0.45, chosen to match the interval already targeted by existing MeerKLASS observations. The starlet l1-norm significantly outperforms the angular power spectrum in constraining cosmological parameters, achieving almost a 3x improvement in the figure of merit relative to the angular power spectrum by capturing non-Gaussian features missed by two-point statistics. Moreover, our results suggest that the starlet l1-norm is robust to several of the systematic effects included in our simulations. Our findings highlight the potential of multi-scale higher-order statistics such as the starlet l1-norm to enhance cosmological inference from future HI intensity mapping surveys.
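A starlet l1-norm of the kind described above can be sketched with the a trous algorithm and the standard B3-spline smoothing kernel; this is a minimal illustration of the statistic, not the paper's pipeline (no noise weighting or signal-to-noise binning of the coefficients):

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_l1norm(image, n_scales=4):
    """Per-scale l1-norm of starlet (isotropic undecimated wavelet)
    coefficients, computed with the a trous algorithm and the B3-spline
    smoothing kernel."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = np.asarray(image, dtype=float)
    norms = []
    for j in range(n_scales):
        # dilate the kernel: insert 2**j - 1 zeros between taps
        step = 2 ** j
        hj = np.zeros(4 * step + 1)
        hj[::step] = h
        smooth = convolve1d(convolve1d(c, hj, axis=0, mode="reflect"),
                            hj, axis=1, mode="reflect")
        w = c - smooth                  # wavelet coefficients at scale j
        norms.append(np.abs(w).sum())
        c = smooth                      # pass the smoothed map to next scale
    return np.array(norms)
```

In a full analysis the coefficients at each scale would typically be binned in signal-to-noise before summing absolute values; here the norm is simply totalled per scale to show the multi-scale structure of the statistic.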
Gravitational-wave standard sirens measure luminosity distance directly from signal strength, offering an independent probe of the Hubble constant and dark energy.
The discovery of the gravitational-wave event GW170817 from a binary neutron star merger, together with its multi-wavelength electromagnetic counterparts, marks the beginning of the era of multi-messenger gravitational wave astronomy. Observations of gravitational-wave signals from compact binary mergers enable an independent measurement of the luminosity distance to the source. This implies that gravitational-wave sources can serve as standard sirens to probe the expansion history of the Universe, providing a new approach to constrain cosmological parameters. In this paper, we review the basic principles of using gravitational-wave standard sirens to constrain cosmology. We discuss various methods for determining the source distance and redshift, as well as the capabilities of second and third generation ground-based detectors and space-based detectors in constraining cosmological parameters, especially the Hubble constant and dark energy parameters. By examining two types of standard sirens, binary neutron star mergers with electromagnetic counterparts as bright sirens and stellar-mass binary black hole mergers as dark sirens, we illustrate the methodology, challenges, and future prospects of the standard siren approach.
We investigate the cosmological implications of an extended gravitational framework based on biconnection gravity, constructed from the Schr\"{o}dinger connection and its dual. In this approach, the difference between the two connections defines the mutual curvature, which encodes the non-Riemannian geometric degrees of freedom, while their symmetric combination reduces to the Levi-Civita connection and hence reproduces general relativity at the background level. Within this setting, we derive the generalized Friedmann equations for a spatially flat Friedmann-Lema\^{i}tre-Robertson-Walker Universe. The resulting equations contain additional geometric contributions that may naturally encode an effective dark energy sector induced by the biconnection degrees of freedom. We explore this extra dark energy by adopting five commonly used parametrizations, namely B$\Lambda$CDM, $\omega$CDM, Chevallier-Polarski-Linder, Barboza-Alcaniz, and a logarithmic equation of state. These considerations are confronted with recent observational data, including DESI DR2, Pantheon$^+$, and CC observations. Our analysis shows that the four parameterizations enter the acceleration phase at almost the same redshifts and share the same current value of the Hubble rate. Furthermore, the statistical comparison based on the Akaike, Bayesian, and Deviance Information Criteria shows that the Barboza-Alcaniz and logarithmic parameterizations have strong evidence and are competitive with $\Lambda$CDM. To situate this biconnection gravity among the plethora of theoretical models describing the current cosmic acceleration, we examine its implications through cosmographic tools, including the deceleration, jerk, and snap parameters, as well as through the Statefinder analysis and $Om(z)$ diagnostic.
A Gaussian model supplies simultaneous unbiased estimators for shear, alignments and rotation from the same galaxies.
The integral polarization of spiral galaxies in the radio band has been proposed as a new tracer of the intrinsic galaxy shape that augments lensing shear measurements. We revisit the method of shear estimation in this context. We introduce a new statistical model in which galaxy shape and polarization are Gaussian random variables whose covariance characterizes the quality of polarization-shape alignment. Applying the principle of likelihood maximization, we then analytically derive unbiased, minimal-variance estimators, which allow us to estimate gravitational shear, intrinsic shape alignment, and line-of-sight polarization rotation simultaneously, accurate to first order in these three effects. New to the literature, our estimators have the merits of being free of biases, robust in situations of few galaxies or poor polarization-shape alignment, admitting an analytic reconstruction noise covariance, and minimizing uncertainties in power spectrum estimation, thus resolving conceptual issues of the existing estimation methods. This new analytic framework is generally applicable to future research that exploits the polarization-shape alignment effect of galaxies.
Leptophilic sub-MeV spin-1 dark matter (DM) can be converted into a photon via inelastic scattering with a free electron or absorption by a neutral hydrogen atom in the primordial plasma. We study for the first time the impact of the energy injection resulting from such processes on cosmic microwave background (CMB) anisotropies. We obtain upper limits on the vector and axial-vector DM-electron couplings using Planck 2018 temperature, polarization, and lensing data for DM masses between 100 eV and 100 keV. We find that, due to the suppression of the hydrogen atomic form factor at high energies, inelastic scattering provides the dominant constraint for DM masses above the keV scale. At lower masses, hydrogen ionization through DM absorption is the leading channel, driven by the higher efficiency of post-recombination energy injection in modifying the free-electron fraction. Although the bounds we derive are considerably weaker than existing laboratory and astrophysical limits, they provide a robust and independent cosmological probe of leptophilic DM interactions.
Astrophysical variabilities of Type Ia supernovae (SNe Ia), such as their link with their birth environment, are now one of the leading sources of systematic uncertainties on the measurement of the dark energy equation-of-state parameter $w$. Population studies of SNe Ia, using large samples, give precious insights into these variabilities. We analyse a volume-limited subsample of the ZTF SN Ia DR2 with BayeSN, a hierarchical Bayesian model for SN Ia SEDs. We investigate the distributions of SN Ia light curve parameters and their link with SN environment. Using a new training of BayeSN released in a companion paper, we find a smaller scatter of Hubble residuals compared to SALT. We then investigate the magnitude step, which accounts for the correlation between SN Ia standardised absolute magnitude and host environments. We find a posteriori steps of $0.103\pm0.010$ mag (a $10.1\sigma$ difference from 0) when using global stellar mass as an environmental proxy, and $0.086\pm0.010$ mag ($8.3\sigma$) when using local colour, in accordance with steps computed using SALT light curve fits. This confirms that the large step seen in the ZTF SN Ia DR2 data was not due to the SALT fit or the associated standardisation process. We then investigate the origin of the step, using a BayeSN model which accounts for both an intrinsic magnitude step and differing dust properties with the SN environment. We find a $0.103\pm0.018$ mag ($5.6\sigma$) step in global mass and a $0.085\pm0.019$ mag ($4.5\sigma$) step in local colour. The means of the $R_V$ distribution are similar between different host environments, with $\Delta\mathbb{E}(R_V)\leq0.2$ across all environment proxies, with significances ranging from $0.6\sigma$ to $1.2\sigma$. This is a strong signal of the existence of an intrinsic dependence of SN Ia absolute magnitude on environment.
Incorporating selection effects directly into predictions enables reliable expansion rate measurements from gravitational waves and galaxies.
Gravitational wave sources act as absolute distance indicators, making them powerful probes of the present-day expansion rate of the Universe, $H_0$. The cross-correlation method combines gravitational wave events with galaxy catalogues to constrain cosmological parameters through their shared large-scale structure. In this work, we investigate how key methodological choices -- including covariance treatment, bias parametrisation for galaxies and gravitational wave events, and distance and redshift binning width -- affect the inferred value of $H_0$. We also study catalogue incompleteness, showing that selection effects can be incorporated directly into the theoretical prediction, without the need to model the missing population explicitly, a key advantage over the standard galaxy catalogue approach. Our results indicate that, with appropriate modelling choices and a sufficiently large sample of precise gravitational wave events, the systematic biases considered here can be effectively mitigated, highlighting the potential of the cross-correlation method for future dark siren precision cosmology.
The 3D mass distributions of galaxy clusters are generally triaxial, a geometry that is difficult to constrain from projected observations. In this work, we measure the projected halo shapes of clusters from their weak lensing signatures using the triaxiality functionality in the Cluster Lensing Mass Modeling software, a tool developed by the Dark Energy Science Collaboration to analyze data from NSF-DOE Rubin Observatory's Legacy Survey of Space and Time (LSST). We measure ensemble halo ellipticity on the plane of the sky via axis-aligned stacking and multipole expansion of the weak lensing data. We study a precursor dataset -- the redMaPPer cluster catalog, the metacalibration shape catalog, and the Directional Neighborhood Fitting photometric redshift catalog from the Dark Energy Survey Year 3 public data release. We select clusters that have a high centering probability (>90%) of the identified central galaxy, and use the satellite galaxy distribution to determine the major-axis orientation for stacking. We extend the analysis to the second order of ellipticity in the monopole and quadrupole measurement. The projected ellipticity of the cluster sample is found to be $0.310^{+0.017}_{-0.016}$ (axis ratio $0.527^{+0.018}_{-0.019}$). The projected cluster ellipticity shows no statistically significant dependence on mass and redshift. We further verify the accuracy of the cluster shape measurement using mock catalogs. This analysis is applicable to datasets from upcoming wide-area cosmic surveys such as LSST, Euclid, and the Roman Space Telescope, where larger sample sizes will lead to tighter constraints on the cluster ellipticities.
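The quoted projected ellipticity and axis ratio are mutually consistent under the common convention $\epsilon = (1-q)/(1+q)$ (assuming that is the definition in use); a quick worked check:

```python
# Check that the quoted projected ellipticity and axis ratio agree under
# epsilon = (1 - q) / (1 + q), with q = b/a the projected axis ratio.
q = 0.527                        # quoted axis ratio 0.527^{+0.018}_{-0.019}
epsilon = (1 - q) / (1 + q)      # ~0.310, the quoted ellipticity

# the inverse mapping, q = (1 - eps) / (1 + eps), recovers the axis ratio
q_back = (1 - epsilon) / (1 + epsilon)
```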
The third data release of the LOFAR Two-metre Sky Survey provides an unprecedented view of the northern sky at 144 MHz. While compact sources can be efficiently identified with automated software packages, the detection of diffuse radio emission associated with galaxy clusters still requires dedicated processing and visual inspection. Given the scale of current and forthcoming radio surveys, automated approaches based on artificial intelligence are becoming essential to the identification of the most interesting targets. We aim to develop an automated pipeline to construct a catalogue of galaxy clusters hosting diffuse radio emission from LoTSS-DR3 20arcsec images. The pipeline is designed to provide both the probability that a cluster hosts diffuse radio emission and an interpretable image of its shape and morphology. We employed Radio U-Net, a convolutional neural network optimised for image segmentation (i.e. pixel-level identification) of diffuse radio emission. To associate detected emission with individual clusters, we combined the network output with positional, mass, and redshift information from four X-ray- and Sunyaev-Zeldovich-selected cluster catalogues, resulting in a merged sample of 3822 clusters covered by the LoTSS-DR3. We produced a pixel-level segmentation map of the full LoTSS-DR3 and a quantitative indicator for the presence of diffuse emission in each cluster. This enables the selection of sub-samples with specific properties for targeted follow-up or statistical studies. As a first demonstration, we identified a sub-sample of 357 clusters selected at the highest network accuracy (76%), and we show some examples of newly detected systems. As a second, using a larger statistical sample, we verified that the detection fraction of diffuse radio sources in the four catalogues increases with the mass and redshift of the clusters. [Abridged]
In recent years, improvements in galaxy cluster observations have enabled a variety of tests of fundamental physics using these systems. In this work, we test the constancy of the speed of light, $c$, by combining X-ray gas mass fraction measurements from galaxy clusters with SNe Ia luminosity distance measurements from Pantheon+. We adopt the SH0ES prior on $H_0$ and the $\Omega_b/\Omega_m$ ratio from galaxy clustering observations, thereby minimizing the dependence of our analysis on any specific cosmological model. We explore different assumptions for the cluster mass calibration (mass bias), including \textsc{CLASH}, \textsc{CCCP}, and Planck-based estimates. We find no deviation from a constant $c$ when adopting \textsc{CLASH} or \textsc{CCCP} priors, while Planck-based calibration yields a mild tension, with the hypothesis of constant $c$ being only marginally consistent at the $2\sigma$ level, indicating a non-negligible sensitivity of the results to the adopted calibration scheme.
The particle nature of dark matter (DM) remains one of the most significant enigmas in modern cosmology. Axion-like particles (ALPs), as well-motivated candidates for cold dark matter, can undergo radiative decay into photon pairs, a process that is significantly enhanced in the presence of ambient radiation fields. In this work, we propose a novel probe of $\mu{\rm eV}$-scale ALP DM by cross-correlating radio intensity mapping (IM) with the large-scale galaxy distribution from the 2MASS Redshift Survey (2MRS) in the local universe ($z\leq 0.1$). We develop a comprehensive theoretical framework that incorporates stimulated decay effects driven by both the Cosmic Microwave Background (CMB) and a bottom-up modeled extragalactic radio background (ERB). By forecasting the sensitivity of the Square Kilometre Array (SKA) Phase 2, we demonstrate that this cross-correlation technique provides a promising and complementary approach to searching for ALP DM signals. This study establishes a new proof-of-concept for utilizing next-generation radio telescopes to probe ALP dark matter on cosmic scales.
The Hubble tension is usually expressed as a discrepancy between the low $H_0$ inferred from Planck CMB data within base $\Lambda$CDM and the higher value obtained from late-time distance-ladder measurements. This scalar comparison compresses distinct inference problems into one derived parameter: Planck CMB, DESI DR2 BAO, and Pantheon+SH0ES constrain physical densities and acoustic scales, ruler-normalized distances, and calibrated luminosity-distance relations, respectively. We reformulate the comparison in terms of the dimensionless expansion history $E(z)=H(z)/H_0$. This does not remove the absolute-scale discrepancy, but separates the normalization encoded in $H_0$ from the redshift-dependent shape of the expansion history. Within a common flat-$\Lambda$CDM framework, each probe posterior is mapped onto posterior-implied $E(z)$ histories. Since the reconstructed values $E(z_k)$ are strongly correlated across redshift, we quantify the global mismatch with a covariance-subspace history displacement $S_{\rm hist}$, alongside pointwise redshift differences. The histories are not identical, but the discrepancies are moderate: the pointwise significance is typically $1$-$2\sigma$, while $S_{\rm hist} \simeq 1.65$ for DESI DR2 and $S_{\rm hist} \simeq 2.55$ for Pantheon+SH0ES relative to Planck. With two retained covariance eigenmodes, these correspond to two-sided one-dimensional Gaussian equivalents of approximately $1.1\sigma$ and $2.1\sigma$, both below the conventional $\simeq 4.9\sigma$ Planck-SH0ES scalar-$H_0$ discrepancy.
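The mapping from a flat-$\Lambda$CDM posterior to $E(z)$ histories and a covariance-subspace displacement can be sketched as follows; this is a toy reimplementation of the idea, using Gaussian $\Omega_m$ draws as stand-ins for full probe posteriors, not the paper's $S_{\rm hist}$ pipeline:

```python
import numpy as np

def E_of_z(z, omega_m):
    """Dimensionless expansion history E(z) = H(z)/H_0 in flat LambdaCDM."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def history_displacement(omegas_a, omegas_b, z_grid, n_modes=2):
    """Mahalanobis-style displacement between two posterior-implied E(z)
    histories, restricted to the leading eigenmodes of the summed covariance."""
    Ea = E_of_z(z_grid[None, :], np.asarray(omegas_a)[:, None])
    Eb = E_of_z(z_grid[None, :], np.asarray(omegas_b)[:, None])
    d = Ea.mean(axis=0) - Eb.mean(axis=0)
    cov = np.cov(Ea, rowvar=False) + np.cov(Eb, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # keep only eigenmodes carrying real variance (a one-parameter toy
    # posterior yields an effectively rank-one covariance)
    n = min(n_modes, int(np.sum(vals > 1e-10 * vals[0])))
    proj = vecs[:, :n].T @ d
    return np.sqrt(np.sum(proj ** 2 / vals[:n]))
```

Two posteriors that agree give a displacement near zero regardless of their shared normalization, which is the point of comparing shapes rather than $H_0$ itself.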
Previous work showed that ultralight-dark-matter solitons can provide dynamical friction for supermassive black-hole binaries, suppressing low-frequency power in the pulsar-timing-array gravitational-wave background and constraining the particle mass and effective ultralight-dark-matter fraction. Here we extend that analysis by comparing the predictive performance of four models: simplified and realistic ultralight-dark-matter implementations, a phenomenological environmental-hardening model, and a gravitational-wave-only model. We use Bayesian leave-one-out cross-validation on the five lowest pulsar-timing-array frequency bins. The phenomenological model gives the largest expected log predictive density, but its advantage over the other models is not large compared with the estimated standard errors. The current data therefore do not decisively prefer one model overall. The clearest pairwise result is within the ultralight-dark-matter framework: the simplified model outperforms the realistic implementation in all five frequency bins. Current pulsar-timing-array data are therefore compatible with ultralight-dark-matter-induced low-frequency suppression, but do not yet distinguish ultralight-dark-matter significantly from more generic environmental descriptions of supermassive-black-hole-binary evolution.
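The model comparison above rests on differences in expected log predictive density and their standard errors; a minimal sketch of the standard computation follows, with illustrative per-bin numbers that are not the paper's values:

```python
import numpy as np

def compare_elpd(lppd_a, lppd_b):
    """Difference in expected log predictive density between models A and B,
    summed over data points (here: frequency bins), plus the standard error
    of the difference computed from the pointwise scatter."""
    diff = np.asarray(lppd_a, dtype=float) - np.asarray(lppd_b, dtype=float)
    n = diff.size
    return diff.sum(), np.sqrt(n * diff.var(ddof=1))

# Illustrative pointwise log predictive densities for five frequency bins.
delta, se = compare_elpd([-1.0, -0.8, -1.2, -0.9, -1.1],
                         [-1.3, -1.0, -1.1, -1.2, -1.4])
# A positive delta favours model A, but only a |delta| comfortably above se
# would count as a decisive preference -- the situation the abstract
# describes as "not large compared with the estimated standard errors".
```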
Timing of core formation since high-redshift decoupling favors 0.1 cm^2/g for field low-surface-brightness galaxies.
Dark matter may play an important role in galaxy formation through its non-trivial properties. For example, self-interacting dark matter may contribute to the formation of the widely observed core structures in galaxies. However, galaxy formation is a complex process, and such core structures can also arise from baryonic effects within the cold dark matter framework. To clarify the role of dark matter self-interactions, it is necessary to study systems that evolve without significant baryonic disturbances. Low-surface-brightness galaxies in the field, which are gravitationally isolated and have evolved with minimal external influence, are suitable candidates for this purpose. Since these galaxies typically contain only a small amount of baryonic matter, strong baryonic effects are not expected in their evolutionary history. In this study, we assume that these galaxies decoupled from proto-clusters at high redshift. Based on this assumption, we set initial conditions and estimate the time required for core formation, which we compare with the time corresponding to the redshift of proto-clusters. We examine five low-surface-brightness galaxies in the field and three observed proto-clusters at redshifts z=2.45, 7.66 and 7.88. Our analyses, based on order-of-magnitude estimates without numerical simulations, exclude a self-interaction cross section of sigma/m = 1 cm^2/g, while sigma/m = 0.1 cm^2/g is favored. This result is consistent with constraints derived from the shapes of present-day cluster cores.
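The order-of-magnitude argument can be illustrated with the usual mean-free-time estimate $t \sim [\rho\,(\sigma/m)\,v]^{-1}$; the halo density and velocity below are placeholder values for a generic low-surface-brightness halo, not those of the analysed galaxies:

```python
def scattering_time_gyr(rho_msun_pc3, v_kms, sigma_m_cm2g):
    """Mean time between dark matter self-interactions,
    t = 1 / (rho * (sigma/m) * v), converted to Gyr."""
    GYR = 3.156e16       # seconds per Gyr
    MSUN_G = 1.989e33    # grams per solar mass
    PC_CM = 3.086e18     # centimetres per parsec
    rho = rho_msun_pc3 * MSUN_G / PC_CM**3   # g/cm^3
    v = v_kms * 1e5                          # cm/s
    return 1.0 / (rho * sigma_m_cm2g * v) / GYR

# Placeholder halo: 0.01 Msun/pc^3 central density, 50 km/s velocity scale.
# sigma/m = 1 cm^2/g gives a scattering time of order 10 Gyr, while
# 0.1 cm^2/g lengthens it tenfold -- the kind of timescale contrast the
# comparison with proto-cluster decoupling redshifts exploits.
t_1 = scattering_time_gyr(0.01, 50.0, 1.0)
t_01 = scattering_time_gyr(0.01, 50.0, 0.1)
```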
We present the first direct computation of spatially averaged dynamical quantities in the local Universe, employing the Cosmicflows-4++ reconstruction and a covariant scalar averaging formalism. We extract the domain-averaged density, expansion rate, spatial curvature, and kinematical backreaction over cosmologically relevant domains around our Galaxy, extending up to a comoving radius of $300~\mathrm{Mpc}/h$. The resulting domain-averaged present-day energy budget features nontrivial variations with scale that reflect a nested structure within the cosmic neighborhood, including a large-scale void shell encompassing the local cosmic web. Remarkably, we find significant contributions to this energy budget from the average spatial curvature at the $\mathcal{O}(10\%)$ level on all probed scales. By contrast, the kinematical backreaction remains much smaller throughout the surveyed volume, reaching at most a $\mathcal{O}(1\%)$ contribution on the smallest scales considered, i.e., $30~\mathrm{Mpc}/h$. Convergence to the global $\Lambda$CDM background is not observed within this range of scales.
Primordial magnetic fields (PMFs) can enhance the abundance of low-mass halos during Cosmic Dawn by sourcing additional small-scale matter fluctuations. This enhanced small-scale power can accelerate early galaxy formation, shifting the timing of Lyman-$\alpha$ coupling, X-ray heating, and reionization toward earlier times and imprinting correlated signatures on the global and fluctuating 21-cm signals. We extend the fast analytic framework {\tt\string zeus21} to include a physically motivated PMF contribution to the linear matter power spectrum, including radiative damping before recombination and magnetic-pressure suppression below the magnetic Jeans scale. The implementation preserves the speed and modularity of {\tt\string zeus21}, enabling efficient exploration of PMF parameter space. For $n_B=-2.9$, we quantify the impact of PMFs on early structure formation and 21-cm observables across a range of fiducial magnetic amplitudes, and forecast detectability with \textit{HERA} and \textit{SKA}. Combining 21-cm forecasts with external CMB priors, we find that upcoming experiments can probe PMFs through their impact on small-scale structure, providing constraints complementary to existing cosmological probes.
Data narrow the vacuum parameter alpha and permit frequency-dependent alpha to ease the blue-tilt problem in the gravitational-wave spectrum
NANOGrav and various pulsar timing array experiments recently reported evidence for a common red noise signal across millisecond pulsars. This signal exhibits Hellings-Downs inter-pulsar correlation patterns, providing compelling evidence for a stochastic gravitational wave background (SGWB) signal. In general, such a background can come from several astrophysical and cosmological phenomena. Assuming this SGWB has an inflationary origin, we use the latest NANOGrav 15-year dataset to constrain the inflationary parameters, e.g., the tensor spectral index ($n_t$) and tensor-to-scalar ratio ($r$), and explore the implications for the reheating phase through constraints on the reheating equation of state ($\omega_{\text{re}}$) and reheating temperature ($T_{\text{re}}$). We find a preference for an extremely blue-tilted tensor spectrum $n_t=2.20^{+0.36}_{-1.2}$ and the radiation-like reheating scenario $\omega_{\text{re}}=0.33^{+0.14}_{-0.36}$. Despite having no concrete evidence for the nature of the primordial vacua, the computation of gravitational waves (GWs) sourced by tensor perturbations assumes the inflationary vacuum to be a Bunch-Davies vacuum. In this work, we examine modifications to the GW spectrum originating from the non-Bunch-Davies primordial vacuum. We find that NANOGrav observations favour a specific type of non-Bunch-Davies vacuum, known as the alpha-vacuum. Furthermore, our analysis demonstrates that the observations strikingly narrow down the range of the parameter $\alpha$ characterizing the vacua. On top of that, we find that a frequency-dependent parametrization of the vacuum parameter $\alpha$ beyond a threshold frequency can yield a minimal solution to alleviate the blue-tilt issue. Finally, we highlight the possibility of testing such frequency dependence of $\alpha$ by probing the GW spectrum through future GW experiments.
A Lagrangian model with local-density mergers shows their clustering power is suppressed over time compared with conserved tracers.
We study the effect of ongoing formation and merger on the assumed number conservation of biased tracers. Using a Lagrangian approach we present a model of the number density which accounts for such effects. The model is nonlocal in time, reflecting the gradual assembly of tracers from the underlying matter. The loss of tracers through merger is modelled by an environmentally-dependent sink, such that the merger rate is proportional to the local number density (higher probability of an event in higher density regions). We derive from our model a formula for the linear bias of non-conserved tracers, showing that such tracers debias more rapidly than conserved ones. Over time the large-scale power becomes increasingly suppressed relative to the conserved prediction, behaviour which has been observed in simulations elsewhere. Implications for current modelling approaches are discussed.
Observing non-Gaussianity in the timing residuals of Pulsar Timing Arrays (PTAs) has recently attracted attention as a potential discriminator between astrophysical and cosmological origins of the observed Gravitational Wave (GW) signal. In this work, we show that even in an idealized signal-dominated setup, after decorrelating data to avoid spurious detections, statistical tests applied to PTA data cannot distinguish between Gaussian and non-Gaussian GW backgrounds (GWBs) in a model-agnostic way. In particular, without making strong assumptions on the GW spectrum or the properties of the population, the sensitivity to any distinctive non-Gaussian feature is washed out.
Motivated by the indications of time-varying dark energy equation of state reported from DESI, we investigate a quintessence model with an exponential potential $V_0 e^{-\lambda\phi/m_{\mathrm{pl}}}$. We derive an analytical relationship between the current equation of state parameter for the quintessence field and the potential parameter $\lambda$ required to realize sufficient duration of radiation and matter domination. Our results provide a useful analytical relation for inferring the potential parameter $\lambda$ from the observed current equation of state parameter. Furthermore, based on this framework, we provide a new analytical upper bound on the potential parameter $\lambda$ for current accelerated expansion. Concretely, we obtain $\lambda<1.94$ by adopting $\Omega_{\phi0}=0.685$.
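The background evolution behind such a relation can be sketched numerically. Below is a minimal, illustrative integration of the standard dynamical-system equations for an exponential-potential quintessence field in a matter background (the Copeland-Liddle-Wands variables), not the paper's analytical derivation; the initial conditions are assumed thawing-like values deep in matter domination:

```python
import numpy as np

# Dynamical-system variables for quintessence with V = V0 exp(-lambda phi / m_pl)
# in a matter background (Copeland, Liddle & Wands 1998):
#   x = phidot / (sqrt(6) H m_pl),  y = sqrt(V) / (sqrt(3) H m_pl),
# so Omega_phi = x^2 + y^2 and w_phi = (x^2 - y^2) / (x^2 + y^2).

def derivs(x, y, lam):
    """d(x, y)/dN with N = ln(a), matter background (w_m = 0)."""
    common = 1.5 * (1.0 + x * x - y * y)
    dx = -3.0 * x + np.sqrt(6.0) / 2.0 * lam * y * y + x * common
    dy = -np.sqrt(6.0) / 2.0 * lam * x * y + y * common
    return dx, dy

def w_phi_today(lam, omega_phi0=0.685, dN=1e-3):
    """Integrate from deep matter domination until Omega_phi reaches omega_phi0
    and return w_phi there.  Initial x, y are assumed (illustrative) values."""
    x, y = 1e-8, 1e-6  # tiny initial kinetic/potential energy fractions
    while x * x + y * y < omega_phi0:
        # one RK4 step in e-folds
        k1 = derivs(x, y, lam)
        k2 = derivs(x + 0.5 * dN * k1[0], y + 0.5 * dN * k1[1], lam)
        k3 = derivs(x + 0.5 * dN * k2[0], y + 0.5 * dN * k2[1], lam)
        k4 = derivs(x + dN * k3[0], y + dN * k3[1], lam)
        x += dN * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += dN * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return (x * x - y * y) / (x * x + y * y)

# Smaller lambda -> w_phi closer to -1 once Omega_phi = 0.685 is reached.
w01 = w_phi_today(0.1)
w10 = w_phi_today(1.0)
print(w01, w10)
```

This only illustrates the monotonic link between $\lambda$ and the present equation of state; the quoted bound $\lambda<1.94$ comes from the paper's analytical treatment.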
The simplest flavor of the Effective Field Theory of Large Scale Structure is based on Newtonian equations and describes the nonlinear matter density and velocity using Einstein-de Sitter kernels. Even in the presence of massive neutrinos, this has been argued to be sufficient for the analysis of data from Stage-III galaxy surveys. In this paper, we show that there exists a simple way to extend the validity range of this framework to more complex problems with a scale-dependent growth factor, while incorporating linear general relativistic (GR) corrections as well. For a given cosmology, an Einstein-Boltzmann code can find the exact gauge transformation that brings the full linear equations of motion of the clustering matter components into a form where they are identical to Newtonian equations for a self-gravitating fluid with scale-independent growth. Non-linear clustering can be consistently computed in this gauge, and the results can be transformed back to the initial gauge in order to incorporate GR and scale-dependent-growth effects. Redshift-space distortions can also be accounted for with a similar strategy. Our method does not incur any additional computational cost. As a showcase, we apply this method to cosmologies with massive neutrinos. For the real-space one-loop power spectrum, we find that the largest deviation between the accurate and standard methods remains below 0.7% for M_nu<0.30 eV. However, in redshift space, it reaches 1.7% for the one-loop quadrupole spectrum at k=0.3 h/Mpc and z=0, with the largest contribution coming from the effect of the cosmological constant on the growth of the velocity field. Our method could be applied to a much wider range of models with more significant scale-dependent growth, as long as a self-consistency condition evaluated by the Einstein-Boltzmann code (on the smallness of a gauge transformation field) is fulfilled.
Recent high precision cosmological observations have revealed several anomalies in the Cosmic Microwave Background (CMB), indicating possible violations of statistical isotropy (nSI). Typically, nSI in the CMB sky is studied in harmonic space, for example using the Bipolar Spherical Harmonics (BipoSH) formalism, where the BipoSH coefficients capture the general structure of the angular correlation function. In this work, we present a geometric real space framework to quantify violations of statistical isotropy complementing the BipoSH approach. This geometric approach involves averaging the angular correlation function over all rotated configurations, weighted by Wigner matrices. These rotational averages systematically isolate the nSI components of the CMB sky. They also provide a physical-space route to interpreting how the BipoSH formalism captures the breaking of rotational symmetry. As a demonstration, we consider an analytical dipole modulation model. We numerically implement the rotational average measures and show their agreement with their harmonic space counterparts. The real space approach to quantify nSI could be advantageous in certain scenarios: rotational averages can directly extract nSI information from the correlation function at the level of a given multipole, bypassing the need to compute BipoSH coefficients up to arbitrarily high internal ranks. Importantly, analyzing the temperature map in real space can circumvent the unavoidable partial-sky effects present in CMB observations, which typically complicate harmonic space approaches. We envisage broader applications of this formalism to studies of primordial non-Gaussianity, CMB polarization, and weak gravitational lensing, as well as to the characterization of general random fields on a sphere.
Large galaxy surveys demand fast and scalable estimators for anisotropic clustering statistics beyond the monopole. We present a suite of efficient FFT-based estimators for power-spectrum and bispectrum multipoles, built upon exact conjugation and parity symmetries of spherical-harmonic--weighted Fourier transforms of real fields. These symmetries eliminate redundant magnetic sub-configurations, thereby reducing the computational cost by a factor of 2. For the Yamamoto power-spectrum multipoles, we further decrease the cost of high-order even multipoles by algebraically expressing ${L}_{2n}$ in terms of lower-order Legendre polynomials, thereby measuring modified high-order multipoles using only low-$\ell$ fields with a small and controlled deviation from the traditional definition. We introduce a new TripoSH bispectrum estimator obtained by compressing the Scoccimarro bispectrum along an alternative triangle side, which substantially reduces the FFT scaling for commonly used quadrupole configurations in the large-$k$-bin limit. We also derive an analytic treatment of bispectrum shot noise by integrating spherical-harmonic kernels over the triangle-constrained $k$-space volumes, avoiding additional FFTs or costly spherical-Bessel evaluations and enabling fast and accurate shot-noise subtraction. Based on these optimizations, we also introduce CosmoNPC, an open-source Python package for large-scale-structure clustering measurements.
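The factor-of-2 saving quoted above rests on the Hermitian (conjugation) symmetry of Fourier transforms of real fields, which makes half of the magnetic sub-configurations redundant. A minimal numpy illustration of that symmetry (not the paper's estimator code):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = rng.standard_normal((32, 32, 32))  # a real density field on a grid

F = np.fft.fftn(delta)

# Hermitian symmetry of a real field: F(-k) = conj(F(k)).  On the FFT grid,
# -k is reached by reversing each axis and rolling by one (wrap-around).
F_neg = np.roll(np.flip(F, axis=(0, 1, 2)), shift=1, axis=(0, 1, 2))
assert np.allclose(F_neg, np.conj(F))

# Consequence: rfftn stores only the non-negative frequencies of the last
# axis -- the same redundancy the spherical-harmonic-weighted estimators drop.
F_half = np.fft.rfftn(delta)
print(F.shape, "->", F_half.shape)  # (32, 32, 32) -> (32, 32, 17)
```

The same identity, applied to spherical-harmonic-weighted transforms of real fields, is what eliminates the negative-$m$ sub-configurations in the estimators.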
One-loop back-reaction on CMB modes vanishes under scale separation and adiabatic long modes, preserving standard predictions.
Primordial black holes (PBHs) can be produced from inflation if the primordial curvature power spectrum is strongly enhanced on scales much shorter than those probed by cosmic microwave background (CMB) experiments. In single-field models this typically requires a transient departure from slow-roll, attractor dynamics, for example realized through a brief ultra-slow-roll phase. In these scenarios, there is reasonable concern that large-scale modes, whose statistics is tightly constrained by CMB observations, might back-react to the amplified perturbations on much shorter scales. In a perturbative expansion for the long-mode power spectrum, this effect first appears at 1-loop. In these proceedings we summarize recent works on this issue, based on the application of the separate-universe framework and its general extension with multi-point propagators. We show that back-reaction at 1-loop is due to either (i) non-linear super-horizon evolution, or (ii) 1-loop-corrected initial conditions. By assuming separation of scales and adiabaticity of the long mode, we show that the 1-loop back-reaction is not observable and large scales decouple from enhanced short ones. While we demonstrate that PBH production within single-field inflation does not disrupt large-scale predictions, we close by discussing scenarios to which our results do not apply.
Zel'dovich and fitting methods miss post-shell-crossing evolution, so full numerics are needed to map signals across detector bands and test the pre-BBN thermal history
We study the dynamics of the collapse of a nonspherical overdense patch during an early matter-dominated era and the associated production of gravitational waves (GWs) using a semirelativistic N-body framework that we develop. The collapsing patch is initialized through a Zel'dovich deformation of a homogeneous sphere and evolved in an Einstein--de Sitter background, while the emitted signal is computed directly from the numerical quadrupole evolution. We show that a reliable prediction of the signal requires a fully numerical treatment of the nonlinear collapse dynamics. In particular, fitting-based procedures and Zel'dovich-based estimates fail to capture the post-shell-crossing evolution and can over- or under-estimate the emitted GW power. After averaging over realizations weighted by the Doroshkevich and BBKS (peak theory) distributions, we find that the two spectra have similar shapes and remain within the same overall order of magnitude at the peak amplitude, although the BBKS result is systematically smaller. The dominant contribution arises from peaks of relatively modest height, around $\nu \simeq 3$, while a larger variance significantly enhances the signal. Finally, by varying the horizon mass and reheating temperature, we map the present-day GW spectra to the sensitivity bands of different classes of detectors. In this way, the signal can populate a broad range of frequencies, from pulsar timing arrays to very high-frequency experiments, showing that GWs from nonspherical collapse can provide a probe of the pre-BBN thermal history.
The collapse of supermassive stars (SMSs, $M\gtrsim10^4\,M_\odot$) to black holes is accompanied by a prodigious flux of neutrinos of all flavors. These are produced thermally via $e^\pm$ annihilations, mostly in the core and just before the formation of a trapped surface. There, the ratio of fluxes for $\nu_e\bar{\nu}_e$-pairs to $\nu_{\mu}\bar{\nu}_{\mu}/\nu_{\tau}\bar{\nu}_{\tau}$-pairs is $\sim$\,5-to-1. This is because at SMS temperature scales, $\nu_e\bar{\nu}_e$ pairs have both charged and neutral current production channels, whereas $\nu_{\mu}\bar{\nu}_{\mu}/\nu_{\tau}\bar{\nu}_{\tau}$-pairs only have neutral current production channels. We point out that the typical energies of these neutrinos, and the run of density in collapsing radiation-dominated supermassive configurations, lead to Mikheyev-Smirnov-Wolfenstein (MSW) resonances inside these objects for the atmospheric neutrino mass splitting scale, $\Delta m^2_\mathrm{atm.}\sim2.4\times10^{-3}$ eV$^2$. In the normal neutrino mass hierarchy, adiabatic flavor transformation through the MSW resonances would then swap the fluxes $\nu_e\leftrightharpoons\nu_{\mu,\tau}$, whereas, in the inverted neutrino mass hierarchy, the anti-neutrino fluxes are swapped, $\bar{\nu}_e\leftrightharpoons\bar{\nu}_{\mu,\tau}$. We also examine the prospects for collective neutrino flavor oscillations in these environments. The implications of flavor oscillations for neutrino energy deposition and neutrino-induced nucleosynthesis in the SMS's outer layers are examined, as are prospects for detecting SMS collapses through various means.
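The density at which the resonance occurs follows from the standard MSW condition $\sqrt{2}\,G_F n_e = \Delta m^2 \cos 2\theta / (2E)$. A back-of-the-envelope evaluation for the atmospheric splitting; the neutrino energy (10 MeV) and mixing angle ($\theta\approx\theta_{13}$) used below are illustrative assumptions, not values taken from the paper:

```python
import math

# Standard MSW resonance condition: sqrt(2) G_F n_e = dm2 cos(2 theta) / (2 E).
# All inputs in natural units (eV); energy and angle are illustrative.
G_F   = 1.166e-23          # Fermi constant, eV^-2
dm2   = 2.4e-3             # atmospheric mass splitting, eV^2
theta = math.radians(8.6)  # ~theta_13, relevant for the atmospheric resonance
E     = 10e6               # assumed neutrino energy: 10 MeV

n_e_natural = dm2 * math.cos(2 * theta) / (2 * math.sqrt(2) * G_F * E)  # eV^3

# Convert to cm^-3 using hbar*c = 1.9733e-5 eV cm, then to a mass density
# assuming Y_e = 0.5 electrons per baryon of mass m_u = 1.6605e-24 g.
hbar_c = 1.9733e-5
n_e = n_e_natural / hbar_c**3   # electrons per cm^3
rho = n_e * 1.6605e-24 / 0.5    # g cm^-3; ~3e3 g/cm^3 for these inputs

print(f"resonance density ~ {rho:.1e} g/cm^3")
```

A resonance density of order $10^3\,{\rm g\,cm^{-3}}$ is well within the run of densities of a radiation-dominated collapsing configuration, consistent with the resonances described above.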
Contamination from stars in the galaxy samples of large-scale structure surveys can bias cosmological constraints if not tightly controlled. This is especially true for lens samples used for galaxy clustering and galaxy-galaxy lensing probes, where contamination is a primary source of additive systematics. We propose an improved approach to star-galaxy separation and an optimal weighting scheme to jointly mitigate additive and multiplicative contamination of the density field at the map level. Our star-galaxy separation approach exploits the fact that photometric galaxy samples used for cosmological inference populate different regions of color-space than the full photometric dataset on which star-galaxy cuts are typically applied, and therefore optimizes star-galaxy separation for the galaxy samples in each redshift bin. This serves as a complementary approach to morphological star-galaxy separators, which can have complicated dependencies on PSF and blending systematics. We demonstrate the method using the Dark Energy Survey Y3 MagLim lens sample, for which we obtain forced NIR unWISE photometry via cross-matching with DECaLS DR9 to define redshift-bin-optimized color cuts. We identify and remove residual stellar contamination in the DES Y3 lens sample, which varies strongly across redshift bins ($1.3-5.5\%$) and across the footprint.
We present new constraints on the local-type primordial non-Gaussianity parameter, $f_\mathrm{NL}^\mathrm{local}$, through analysis of the scale-dependent bias effect on the cosmic infrared background (CIB). To avoid biases from galactic dust contamination on large scales, we use cross-correlations between the CIB and Planck cosmic microwave background (CMB) lensing maps to constrain non-Gaussianity. Our measurement employs new dust-cleaned CIB maps that have been designed to be unbiased on large scales, which allows us to improve our constraining power on $f_\mathrm{NL}^\mathrm{local}$ by a factor of $\sim 2$ over previous CIB analyses. We derive a constraint of $f_\mathrm{NL}^\mathrm{local}=43 \pm 23$, matching the precision of the tightest existing constraints from cross-correlation methods. Consistency- and null-tests demonstrate that our results are robust to modeling assumptions and residual dust contamination.
We present VERSUS, a publicly available, fast void-finding algorithm designed to identify spherical underdensities in the density field that can be accurately described by excursion set predictions of the void size function. We validate the algorithm against both a synthetic distribution of particles designed to trace a known input void population, and a mock galaxy sample built from a $(2\ h^{-1}\text{Gpc})^3$ AbacusSummit simulation populated with a realistic galaxy-halo connection, including systematic effects designed to mimic real survey data. In all cases, VERSUS demonstrates excellent performance, achieving strong agreement with theoretical predictions for the void size function across the range $25 < R\,[h^{-1}\text{Mpc}] < 61$ without requiring any post-processing of the void catalogue. The code is user-friendly, modular, and readily applicable to observational survey data. Its computational efficiency further enables the use of simulation-based modelling approaches, facilitating robust and consistent cosmic void analyses with Stage-IV surveys.
We compute the thermal activation rate of metastable self-gravitating Bose-Einstein condensates with attractive self-interaction (e.g., dilute axion stars) by using the instanton theory. Explicit analytical results are given close to the maximum mass $M_{\rm max}$ [P.H. Chavanis, Phys. Rev. D 84, 043531 (2011)] by using the normal form of the saddle-node bifurcation close to that point. We show that the lifetime of metastable states is extremely long, scaling as $t_{\rm life}\sim e^N\, t_D$, where $N$ is the number of bosons in the system and $t_D$ is the dynamical time ($N\sim 10^{57}$ and $t_D\sim 10\, {\rm hrs}$ for typical QCD axion stars; $N\sim 10^{96}$ and $t_D\sim 100\, {\rm Myrs}$ for the quantum core of a dark matter halo made of ultralight axions). Therefore, metastable equilibrium states can be considered as stable equilibrium states in practice. We compare our results with similar results obtained for Bose-Einstein condensates in laboratory, globular clusters and self-gravitating Brownian particles in astrophysics, the Brownian mean field model (BMF) in statistical mechanics, and bacterial populations in biology. Our presentation parallels the calculation of the quantum tunneling rate of dilute axion stars given in a previous paper [P.H. Chavanis, Phys. Rev. D 102, 083531 (2020)]. These calculations can find application in various domains of physics and astrophysics.
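The scaling $t_{\rm life}\sim e^N t_D$ is easiest to appreciate in logarithms, since $e^N$ for $N\sim 10^{57}$ overflows any floating-point representation. A one-line check of the numbers quoted above:

```python
import math

# t_life ~ exp(N) * t_D.  exp(1e57) overflows, so work in log10 throughout:
# log10(t_life / t_D) = N * log10(e).
for N, label in [(1e57, "QCD axion star (t_D ~ 10 hr)"),
                 (1e96, "ULA halo core (t_D ~ 100 Myr)")]:
    log10_ratio = N * math.log10(math.e)
    print(f"{label}: log10(t_life / t_D) ~ {log10_ratio:.2e}")

# The age of the Universe is ~1e17 s (log10 ~ 17), so these metastable
# states are stable for all practical purposes, as stated above.
```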
UV luminosity function shapes separate escape fraction from star-formation efficiency, yielding the first empirical f_esc(z) from z=7 to 12.
JWST's discovery of unexpectedly bright $z>10$ galaxies has triggered claims that standard $\Lambda$CDM cannot reproduce their abundances, while estimates of the ionizing escape fraction $f_{\rm esc}$ at $z>6$ have spanned a factor of four for over a decade. Here we show that both tensions arise from a structural degeneracy in reionization equations: global observables constrain only the product $f_{\rm esc}\times f_{\star,0}$ (peak star formation efficiency), not individual parameters. We demonstrate that this degeneracy, previously considered a limitation, provides a precise diagnostic framework. By leveraging JWST UV luminosity function shapes to independently constrain $f_{\star,0}$, we derive robust bounds on $f_{\rm esc}$. Joint profile-likelihood analysis across Gaussian, log-normal, and duty-cycle burst scatter models excludes the proposed crisis threshold ($\varepsilon > 3.5\%$) at $4.5\sigma$ confidence, with stochastic star formation histories strengthening rather than weakening the result. Combining these constraints with constant and evolving $f_{\star,0}$ measurements yields the first empirical reconstruction of $f_{\rm esc}(z)$ across $z=7$--$12$. A constant-efficiency scenario ($f_{\rm esc}\approx 10$--$16\%$) connects smoothly to low-redshift direct detections, whereas an evolving scenario ($f_{\rm esc} \approx 6\%$ at $z=12$) conflicts with low-metallicity ISM porosity expectations. JWST Cycle 3--4 will distinguish these pathways at $>2\sigma$, transforming a long-standing fundamental inference barrier into a powerful quantitative probe of early-universe physics.
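The logic of breaking the degeneracy can be reduced to interval arithmetic: global observables bound only the product $P = f_{\rm esc}\times f_{\star,0}$, so an independent bound on $f_{\star,0}$ from UV luminosity function shapes converts into a bound on $f_{\rm esc}$ by division. A toy sketch; the numbers below are purely illustrative, not the paper's posterior values:

```python
# Global reionization observables constrain only P = f_esc * f_star0, so an
# interval on P plus an independent interval on f_star0 bounds f_esc.

def fesc_interval(P_lo, P_hi, fstar_lo, fstar_hi):
    """Propagate interval bounds through f_esc = P / f_star0."""
    return P_lo / fstar_hi, P_hi / fstar_lo

# Illustrative numbers only: product in [0.008, 0.016], peak SFE in [0.08, 0.12].
lo, hi = fesc_interval(0.008, 0.016, 0.08, 0.12)
print(f"f_esc in [{lo:.3f}, {hi:.3f}]")  # -> roughly [0.067, 0.200]
```

The paper's actual analysis is a joint profile likelihood over scatter models, but the same division underlies why tightening $f_{\star,0}$ directly tightens $f_{\rm esc}(z)$.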
We present the first end-to-end validation of the Euclid baryon acoustic oscillation (BAO) analysis pipeline, encompassing density-field reconstruction, two-point correlation function measurement, and cosmological-parameter inference. Using eight Euclid-like mock catalogues from each of four Flagship I snapshots, designed to reproduce the expected statistical properties of the first Euclid data release (DR1), we assess the two standard BAO reconstruction methods based on the Zel'dovich approximation, RecSym and RecIso, across $0.9 \leq z \leq 1.8$. The pipeline introduces several methodological advances: an emulator-based model evaluator (Bora.jl) combined with a Hamiltonian Monte Carlo sampler (NUTS), achieving more than a 500-fold speed-up relative to standard Markov chain Monte Carlo, and a semi-analytical covariance estimator (BeXiCov+WinCov) that enables robust error estimates from only eight mock realisations while remaining stable under fiducial-cosmology variations. These components ensure computational efficiency while reducing the risk of underestimating parameter uncertainties. Both reconstruction schemes yield unbiased BAO measurements across all redshifts and analysis choices, including smoothing scale and fiducial cosmology. In each snapshot, reconstruction enhances the figure of merit for $\{\Omega_m, H_0 r_s\}$ by $\sim3$, equivalent to tripling the effective survey volume. Combining the four redshift bins, the improvement remains substantial, with BAO-only constraints reaching $\sim10\%$ precision on $\Omega_m$ and $\sim3\%$ on $H_0 r_s$. Results from RecSym and RecIso are consistent within uncertainties, though we recommend RecSym during testing due to its lower sensitivity to covariance variations. These findings establish the accuracy, robustness, and scalability of the Euclid BAO pipeline for DR1, providing a solid foundation for future cosmological analyses.
The amplitude of the detected stochastic gravitational wave background (SGWB) measured by pulsar timing arrays (PTAs) and the discovery of early and over-massive central black holes at high redshift by the James Webb Space Telescope (JWST) challenge current models of supermassive black hole (SMBH) formation. We study whether halos containing a significant population of primordial black holes (PBHs) would increase the amplitude of the PTA signal. PBHs add an iso-curvature component to the matter power spectrum, accelerating the formation and merger of dark matter halos at all redshifts. We propose that black holes in the halo sink to the center via dynamical friction. The central black hole grows through hierarchical merging in addition to the gas accretion channel. We compute the resulting GW amplitude and perform a Bayesian inference analysis using the NANOGrav 15-year dataset. We show that the predicted amplitude of the gravitational wave background agrees with the observations. Our model only requires $0.09\%-0.12\%$ of the total mass of the halo to fall to the center, compatible with a fraction $f_{\rm pbh}\sim 0.1$ of PBHs as dark matter, if the in-falling PBHs in the stellar mass range make up about $1\%$ of the total population, as found in our previous estimation of the formation of SMBHs at $z\sim 6-10$. The PBH model that explains the newly found JWST populations of SMBHs thus also explains the amplitude of the stochastic gravitational wave background.
Higher-order correlation functions are firmly established as a fundamental tool for the statistical analysis of clustering in modern galaxy surveys. They have been shown to greatly enrich the information content beyond what two-point statistics extract, allowing us to break degeneracies between model parameters and to constrain departures from Gaussianity. This paper presents the statistical estimators adopted to evaluate the galaxy three-point correlation function and its numerical implementation within the data analysis pipeline of the Euclid Science Ground Segment. Two different algorithms are adopted to count triplets: a direct and exact counting method capable of providing a robust three-point correlation function measurement for any triangular configuration, and a more efficient method based on spherical harmonic decomposition, designed to address the computational challenges of measuring the three-point statistics for data sets as large as those of the final Euclid survey. The spherical harmonic decomposition estimates the Legendre coefficients of the three-point correlation function up to a finite expansion order. Despite being an approximation, the three-point function measured with this approach satisfies the scientific requirements of the mission. We also introduce, implement, and validate the random split technique, which reduces the computational cost of counting triplets in the reference random sample by a factor of 10, without significantly compromising numerical accuracy. We evaluated the robustness, precision, and accuracy of the numerical estimates through an extensive campaign of validation tests, the results of which are presented. Finally, we quantify the computational requirements and their scaling with the expected size of the Euclid data set, showing that a complete three-point analysis of the final Euclid survey is within computational reach.
We develop a formalism to characterize the imprints of late-time sources of cosmological fluctuations under the sole assumption that the injection occurs on timescales short compared to the horizon. For post-recombination injections, we derive the general modification of photon geodesics in the presence of scalar, vector, and tensor perturbations, and compute the resulting impact on the Cosmic Microwave Background through the integrated Sachs-Wolfe effect. We show that the signal is generically dominated by instantaneous injections of anisotropic stress. As an application, we consider first-order phase transitions in a sequestered dark sector and show that current observations constrain fractional energy injections at the permille level.
Pantheon+ hemispheric analysis finds Δq0 = 0.112 signal aligned with CMB direction, suppressed when data-inferred dipole replaces standard velocity corrections
We present a hemispherical comparison analysis of the deceleration parameter $q_0$ using the Pantheon+ sample of Type Ia supernovae to test the isotropy of cosmic acceleration and the robustness of redshift corrections. We detect directional variations in $q_0$ across redshift frames. Even in the $z_{\mathrm{HD}}$ frame, where corrections for the CMB dipole and peculiar velocities are applied, a residual dipolar anisotropy persists with $\Delta q_0 = 0.112$ and a maximum signal-to-noise ratio $S/N = 2.155$, aligned with the CMB dipole direction and decreasing with increasing minimum redshift cut. The anisotropy is stronger in the $z_{\mathrm{hel}}$ and $z_{\mathrm{CMB}}$ frames, where kinematic corrections are incomplete, while the transition to $z_{\mathrm{HD}}$ reduces but does not remove the signal. Inferring the dipole from the supernova data yields $v_{\odot} = 307.26^{+32.00}_{-22.28}\ \mathrm{km\,s^{-1}}$ toward $(\mathrm{RA},\mathrm{DEC}) = (156.40^{+4.72}_{-4.71}, -3.38^{+5.54}_{-8.23})^\circ$, mildly discrepant with the Planck CMB dipole at the $\sim 1.9\sigma$ level. When this SNe-inferred dipole is incorporated into the redshift correction pipeline, the hemispherical anisotropy is suppressed, with the dipolar pattern disappearing and the maximum signal reduced to $S/N \lesssim 1.75$, while the remaining fluctuations become consistent with statistical noise, suggesting that part of the signal arises from residual mismatches in the modeling of the local velocity field. Since current redshift corrections rely on peculiar velocity reconstructions based on the density field, our results suggest a residual bulk flow not fully captured by these models, highlighting a source of systematic uncertainty in low redshift supernova cosmology.
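The hemispherical-comparison idea can be sketched on synthetic data: inject a dipole modulation into values scattered on the sky, then scan trial directions and difference the hemisphere means. This is an illustrative toy, not the Pantheon+ pipeline (which fits $q_0$ per hemisphere from supernova distances):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sky positions (unit vectors) and a dipole-modulated observable,
# v = q0 + A * (n . d_true); purely illustrative, not Pantheon+ data.
n = rng.standard_normal((4000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
d_true = np.array([0.5, -0.6, 0.62])
d_true /= np.linalg.norm(d_true)
v = -0.55 + 0.112 * (n @ d_true)  # Delta q0-like dipole amplitude

# Trial directions on a Fibonacci sphere (quasi-uniform grid).
k = np.arange(2000)
phi = k * (1 + 5**0.5) * np.pi          # golden-angle azimuthal spacing
z = 1 - (2 * k + 1) / 2000
r = np.sqrt(1 - z * z)
dirs = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

# Hemispherical comparison: difference of means between the two hemispheres
# defined by each trial direction; maximized when the trial axis aligns
# with the injected dipole.
proj = n @ dirs.T                        # (n_points, n_dirs)
delta = np.array([v[proj[:, j] > 0].mean() - v[proj[:, j] <= 0].mean()
                  for j in range(dirs.shape[0])])
best = dirs[np.argmax(delta)]
print("recovered . true =", best @ d_true)
```

For a pure dipole the expected hemispheric difference is $A\cos\gamma$ (with $\gamma$ the angle to the true axis), so the argmax recovers the injected direction up to grid resolution and sampling noise.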
Primordial gravitational waves (PGWs) generate scalar density perturbations at second order. Since the induced density contrast is quadratic in the tensor field, it is intrinsically non-Gaussian. We study the imprint of this tensor-induced non-Gaussianity (NG) on the large-scale clustering of dark matter halos through its correction to halo bias. Focusing on inflationary scenarios with a peaked primordial tensor spectrum, we derive the leading scale-dependent contribution sourced by the bispectrum of the induced density field. While yielding a percent-level bias correction for massive low-redshift halos, this effect can reach an $\mathcal{O}(1)$ modulation for rare, high-redshift halos at $z=7$. Notably, the resulting signature exhibits a distinct scale dependence that is not captured by standard primordial non-Gaussianity (PNG) templates. Our results establish halo bias as a novel probe of PGWs through their imprint on the large-scale structure, providing a complementary window into the inflationary epoch.
We present \texttt{CosmoPostProcess}, a simulation-based forward-modelling algorithm calibrated to reproduce Euclid optical cluster observables. Its main deliverable is a correction for stacked surface-density profiles, binned in richness and redshift, accounting for selection systematics in richness-selected samples relative to unbiased references. We focus on the Euclid richness definition foreseen for cosmological analyses, which does not apply a colour selection; red-sequence richness is not considered. The algorithm processes $N$-body simulations by painting galaxies with a halo-occupation model and emulating survey detection and richness assignment. We also implement a novel estimate of optical cluster centres from projected galaxy densities, validated against Euclid pipelines. Baryonic effects are included through a correction calibrated on hydrodynamical simulations; the baryon-corrected excess surface density agrees within \(2\,\%\) over \(r\in[0.1,\,5]\,h^{-1}\,\mathrm{Mpc}\). Selection-bias contributions are assessed by varying cosmology and the mass--richness relation. Projection-induced selection bias follows a robust pattern: correlated large-scale structure projected along the line of sight enhances the stacked profile near the one-halo to two-halo transition, peaking at about \(1\,h^{-1}\,\mathrm{Mpc}\) with an amplitude of \(20\!-\!40\,\%\), depending on richness and redshift. The effect is mild at low and intermediate redshift ($z\lesssim0.7$), at the few-percent level, but becomes more relevant at higher redshift ($z\gtrsim0.7$). Baryonic modifications remain sub-dominant outside the core, at about \(2\,\%\) beyond \(r\gtrsim0.3\,h^{-1}\,\mathrm{Mpc}\). The framework delivers radial profile corrections with uncertainties, combining projection-induced selection bias, baryonic physics, and miscentring, to control systematics in Euclid DR1 cluster cosmology. (abridged)
We investigate deviations from the cosmic distance duality relation adopting model-dependent and -independent approaches using i) a Taylor expansion, ii) a power-law parameterization, iii) a logarithmic correction, iv) a (2;1) Pad\'e polynomial and v) a second order Chebyshev parameterization. We derive constraints on all parameters using observational Hubble data, galaxy clusters, type Ia supernovae, DESI data and gamma-ray bursts. Through Markov chain Monte Carlo analyses adopting the Metropolis-Hastings algorithm, we find no significant violation of duality, while model selection criteria favor flat scenarios even though a slight curvature is not entirely ruled out. Regarding the $H_0$ tension, we find a $1\sigma$ preference for $h^R_0=0.730\pm0.010$ from supernovae when dropping DESI data and for $h^P_0=0.674\pm 0.005$ from Planck when using DESI and gamma-ray bursts.
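The quantity being parameterized is the duality ratio; the first three functional forms listed above are commonly written as follows (standard literature choices, which may differ in detail from the paper's exact conventions):

```latex
\eta(z) \;\equiv\; \frac{d_L(z)}{(1+z)^2\, d_A(z)}\,, \qquad
\eta_{\rm Taylor}(z) = 1 + \eta_0\, z\,, \qquad
\eta_{\rm PL}(z) = (1+z)^{\eta_0}\,, \qquad
\eta_{\rm log}(z) = 1 + \eta_0 \ln(1+z)\,,
```

with $\eta_0 = 0$ (i.e., $\eta \equiv 1$) recovering the Etherington distance duality relation.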
Each strongly lensed image of a quasar behind a lensing galaxy (or galaxy cluster) is composed of a swarm of micro-images. This is a result of microlensing due to stellar-scale substructure in the lens. The presence of microlenses forms a network of micro-caustics, and a source transiting these micro-caustics gives rise to variations in the observed strongly lensed images. These micro-image swarms are currently observable only through collective intensity fluctuations, which hide the underlying information on the stellar (and compact dark matter, if any) mass functions within the swarm. To unlock the information present in micro-image swarms, it is necessary to explore new techniques. In this work, we study the prospects of determining the micro-image swarm size in lensed quasar images using intensity interferometry (i.e., the Hanbury Brown & Twiss effect). We consider QSO 2237+0305 and PS J0147+4630, two of the brightest lensed quasars in the sky, and study micro-image swarm features in visibility space for both macro-minimum and macro-saddle-point images. Finally, we argue that, with ongoing and expected technical advances, observations of micro-image swarms are plausible, at least for the brightest lensed quasars.
The cold dark matter model successfully describes the Universe on large scales, yet faces challenges at sub-galactic scales. Ultralight dark matter (ULDM), with particle masses around $10^{-22} \mathrm{eV}$, offers a promising solution to these small-scale issues. Pulsar Timing Arrays (PTAs), designed to detect nanohertz gravitational waves, can also provide a sensitive probe for ULDM signals. In this work, we perform a Bayesian search for ULDM using PTA data sets, focusing on two types of signals: the oscillatory gravitational potential from scalar ULDM and the fifth-force interaction mediated by dark photon dark matter (DPDM). We incorporate pulsar distances in the analysis to better model the ULDM density. No statistically significant evidence for ULDM has been found, therefore we place 95% confidence-level upper limits on the relevant parameters. For scalar ULDM, our analysis does not exclude the scenario in which ULDM constitutes all of dark matter. The constraints from PPTA-DR3 show significant improvements over the earlier PPTA-DR2 (2018 Preview) across most of the mass range, and are consistent with the recent uncorrelated limits from other PTAs. We also present for the first time the DPDM constraints using EPTA data. The obtained bounds on the DPDM from the EPTA-DR2 and PPTA-DR3 are comparable to existing constraints.
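The reason PTAs probe this mass range is that the gravitational potential sourced by scalar ULDM oscillates at twice the field's Compton frequency, which lands in the nanohertz band for $m \sim 10^{-22}\,\mathrm{eV}$ (the Khmelnitsky & Rubakov mechanism). A quick check:

```python
import numpy as np

HBAR_EVS = 6.582119569e-16  # reduced Planck constant [eV s]

def uldm_frequency_hz(m_ev):
    """Frequency of the oscillating gravitational potential sourced by
    scalar ULDM: the potential oscillates at omega = 2 m c^2 / hbar,
    twice the field's Compton frequency."""
    omega = 2.0 * m_ev / HBAR_EVS  # [rad/s]
    return omega / (2.0 * np.pi)

# For m = 1e-22 eV the signal sits near 5e-8 Hz, inside the PTA band
f = uldm_frequency_hz(1e-22)
```

The frequency scales linearly with the particle mass, so the PTA observing span and cadence bracket the accessible ULDM mass window.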
The pristine underdense patches of the Universe, cosmic voids, are powerful cosmological laboratories, uniquely sensitive to dark energy, modified gravity, and neutrino masses, yet their baryonic content remains uncharacterized. We present the first observational constraint on baryon underdensity in void interiors, exploiting the dispersion measures (DMs) of Fast Radio Bursts (FRBs) as tracers of the free electron column, independent of gas phase, temperature, and metallicity. By stacking 3,455 sightlines from CHIME/FRB on 1,288 SDSS BOSS voids over redshifts $0.2 < z < 0.7$, we measure a DM deficit toward void centers at $3.2\sigma$ significance, establishing that diffuse baryons inhabit the emptiest corners of the cosmic web at a suppressed level. The measured signal amplitude is consistent with an effective Universe model built directly from the observed galaxy underdensity in these voids, and a baryonic model calibrated to the FRB DM-redshift relation ($\alpha_v = 1.80 \pm 0.87$). A uniform-density void model yields an electron density contrast of $\delta_\mathrm{e,v} = -0.58 \pm 0.30$, implying a $\sim 60$% underdensity of baryons in void interiors relative to the cosmic mean. Jointly interpreting our FRB measurement with existing stacks of the thermal Sunyaev-Zel'dovich effect on voids further constrains the mean void gas temperature to $T_\mathrm{e} \lesssim (1.1 \pm 0.7) \times 10^6$ K, pointing to a warm-hot diffuse phase, consistent with hydrodynamical simulation predictions. With forthcoming FRB (CHORD, DSA, SKA) and galaxy (DESI, LSST, Euclid, PFS-Subaru, SPHEREx, Roman) surveys, set to expand both samples by orders of magnitude, this approach opens a new window onto tomographic baryon mapping, with direct implications for feedback models governing gas expulsion into low-density environments, and for the use of cosmic voids to extract cosmological constraints.
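The baseline against which a void DM deficit is measured is the sky-averaged Macquart relation. A minimal numerical sketch under flat $\Lambda$CDM, with an assumed diffuse-baryon fraction $f_d$ and electron fraction $\chi_e$ (illustrative parameter values, not those fitted in the paper):

```python
import numpy as np

# Physical constants (SI)
C = 2.998e8        # speed of light [m/s]
G = 6.674e-11      # Newton's constant [m^3 kg^-1 s^-2]
M_P = 1.673e-27    # proton mass [kg]
MPC = 3.086e22     # megaparsec [m]
PC_CM3 = 3.086e22  # 1 pc cm^-3 expressed in m^-2

def mean_cosmic_dm(z, h=0.674, omega_b=0.0493, omega_m=0.315,
                   f_d=0.84, chi_e=0.875, n=2048):
    """Sky-averaged <DM_cosmic>(z) in pc cm^-3 for a flat LCDM background:
    <DM> = (3 c H0 Omega_b)/(8 pi G m_p) * int f_d chi_e (1+z')/E(z') dz'."""
    H0 = 100.0 * h * 1000.0 / MPC  # [1/s]
    pref = 3.0 * C * H0 * omega_b / (8.0 * np.pi * G * M_P)  # [m^-2]
    zz = np.linspace(0.0, z, n)
    E = np.sqrt(omega_m * (1.0 + zz)**3 + 1.0 - omega_m)
    g = f_d * chi_e * (1.0 + zz) / E
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(zz))  # trapezoid rule
    return pref * integral / PC_CM3
```

At $z = 0.5$ this gives roughly $450\ \mathrm{pc\,cm^{-3}}$; sightlines through void interiors accumulate less than this mean, which is the deficit the stacking measurement isolates.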
In this study, we use a sample of 130 well-localized fast radio bursts (FRBs) to constrain the physical baryon density $\Omega_{\rm b}h^2$, and the astrophysical contribution from host galaxies. The cosmological dependence entering the intergalactic dispersion measure is described through a non-parametric reconstruction of the Hubble parameter $H(z)$ obtained from cosmic chronometer data using the \texttt{ReFANN} neural-network framework, independently of the FRB sample. Within a Bayesian analysis, we jointly infer $\Omega_{\rm b}h^2$ and the parameters of a log-normal host-galaxy distribution, namely its median $e^\mu$ and logarithmic scatter $\sigma_{\rm host}$, using both real FRB data and a mock catalog. For the real sample, we obtain $\Omega_{\rm b}h^2=0.02236\pm0.00090$, $e^\mu=178.15^{+16.51}_{-16.97}~\mathrm{pc}\,\mathrm{cm}^{-3}$, and $\sigma_{\rm host}=0.794^{+0.064}_{-0.067}$. For the mock catalog, we find $\Omega_{\rm b}h^2=0.02248\pm0.00018$, $e^\mu=182.36^{+6.83}_{-6.48}~\mathrm{pc}\,\mathrm{cm}^{-3}$, and $\sigma_{\rm host}=0.711^{+0.024}_{-0.025}$. The baryon density constraint from the real FRB sample is in excellent agreement with both Big Bang Nucleosynthesis and Planck CMB determinations, differing from their central values by only $\simeq 0.05\%$. The mock analysis further illustrates the potential of future FRB samples, reducing the uncertainty on $\Omega_{\rm b}h^2$ to the sub-percent level while remaining statistically consistent with early-Universe constraints. Our findings show that combining FRB dispersion measures with a non-parametric reconstruction of the expansion history provides a robust pathway to constrain both cosmological and astrophysical parameters, establishing FRBs as a complementary low-redshift probe of the baryon density.
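The inference rests on the standard decomposition of the observed dispersion measure together with the stated log-normal host term (the form of the Milky Way contribution is not specified above and is a conventional choice here):

```latex
{\rm DM_{obs}} \;=\; {\rm DM_{MW}} + {\rm DM_{IGM}}(z) + \frac{{\rm DM_{host}}}{1+z}\,,
\qquad
P({\rm DM_{host}}) \;=\; \frac{1}{{\rm DM_{host}}\,\sigma_{\rm host}\sqrt{2\pi}}
\exp\!\left[-\frac{\left(\ln {\rm DM_{host}} - \mu\right)^2}{2\sigma_{\rm host}^2}\right],
```

so that $e^{\mu}$ is the median and $\sigma_{\rm host}$ the logarithmic scatter constrained in the fits above, while $\Omega_{\rm b}h^2$ enters through the normalization of ${\rm DM_{IGM}}(z)$ evaluated on the reconstructed $H(z)$.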
This paper presents a necessarily incomplete review of the evolution of cosmology since the first Astro/Cosmo Moriond meeting in 1981. I trace the journey from the classical Big Bang model based on three pillars -- universe expansion, primordial nucleosynthesis, and the cosmic microwave background -- to the modern $\Lambda$CDM paradigm and the discovery of cosmic acceleration. I discuss major observational milestones: the COBE discovery of CMB fluctuations, the CMB measurements establishing a spatially flat universe, the pivotal discovery of accelerated expansion through Type Ia supernovae, and the emergence of precision cosmology with Planck. I review current tensions in cosmological parameters, particularly the Hubble tension and $\sigma_8$ discrepancies, and discuss future prospects from large-scale structure surveys like DESI. The emergence of ``Big Bang 2.0'' reflects the profound paradigm shift from a model based on standard physics to a dynamical cosmos dominated by dark matter and dark energy, the description of which requires a physics that has yet to be developed and validated.
We measure the kinetic Sunyaev-Zel'dovich (kSZ) signal through a joint analysis of the pairwise kSZ effect and galaxy clustering using CMASS galaxies and ACT DR6 maps. This approach breaks degeneracies between the optical depth and nuisance parameters, enabling a reconstruction of the halo optical depth profile as a function of aperture scale. The kSZ signal reaches its highest signal-to-noise ratio of 7.2 at an aperture radius of $\theta_{\rm AP} = 2$ arcmin, while the full profile rejects the no-kSZ hypothesis at $8.7\sigma$. Applying the same analysis pipeline to the Websky simulation, we find that the observed optical depth profile is somewhat more extended than the simulated one. This difference suggests that baryonic feedback in the real Universe may be stronger and redistribute gas to larger radii more efficiently than modeled in the simulation, although residual systematic effects and modeling uncertainties remain to be further investigated.
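A minimal sketch of the mean pairwise-temperature estimator in its standard (Hand et al. 2012) form; the weighting, mean-subtraction, and aperture-photometry details of the actual CMASS $\times$ ACT DR6 pipeline are not reproduced here.

```python
import numpy as np

def pairwise_ksz(temps, positions, r_bins):
    """Mean pairwise kSZ estimator, binned in 3D separation r:
    T_pw(r) = -sum_ij (T_i - T_j) c_ij / sum_ij c_ij^2,
    with geometric weight c_ij = r_hat_ij . (r_hat_i + r_hat_j)/2,
    where r_ij = r_i - r_j and r_hat_i points from the observer."""
    n = len(temps)
    i, j = np.triu_indices(n, k=1)          # all unique pairs
    sep = positions[i] - positions[j]
    r = np.linalg.norm(sep, axis=1)
    r_hat_i = positions[i] / np.linalg.norm(positions[i], axis=1, keepdims=True)
    r_hat_j = positions[j] / np.linalg.norm(positions[j], axis=1, keepdims=True)
    c = np.einsum('pk,pk->p', sep / r[:, None], 0.5 * (r_hat_i + r_hat_j))
    dT = temps[i] - temps[j]
    est = np.full(len(r_bins) - 1, np.nan)
    for k in range(len(r_bins) - 1):
        m = (r >= r_bins[k]) & (r < r_bins[k + 1])
        if m.any():
            est[k] = -np.sum(dT[m] * c[m]) / np.sum(c[m] ** 2)
    return est
```

Because galaxy pairs fall toward each other on average, a positive estimate at small separations signals the kSZ effect; the estimator is linear in the temperatures, so uncorrelated CMB fluctuations average away over many pairs.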
Cosmic microwave background anisotropies encode crucial information about the early Universe and fundamental cosmological physics. Although the standard $\Lambda$CDM model provides a successful description of cosmic evolution, persistent cosmological tensions and subtle small-scale anomalies still challenge its internal consistency. In this paper, we investigate six phenomenological amplitude parameters $A_{\rm{new}}$ (new=L, SW, Dop, eISW, lISW, Pol) corresponding to the key effects related to CMB anisotropy: the Lensing, Sachs-Wolfe, Doppler, early Integrated Sachs-Wolfe, late Integrated Sachs-Wolfe, and Polarization effects, respectively. Using modified CAMB and Cobaya packages, we constrain the $\Lambda$CDM$+A_{\rm{new}}$ models with two data combinations: Planck+DESI+PantheonPlus (PDP) and Planck+ACT+DESI+PantheonPlus (PADP). Only the $\Lambda$CDM$+A_{\rm{L}}$ model is favored by the AIC, with $A_{\rm{L}}=1.0656_{-0.0303}^{+0.0304}$ from PDP and $A_{\rm{L}}=1.0795_{-0.0289}^{+0.0260}$ from PADP, which implies 2.16$\sigma$ and 3.06$\sigma$ deviations from the $\Lambda$CDM model; values of $A_{\rm{SW}}$ show 1.21$\sigma$ and 1.96$\sigma$ deviations from 1; $A_{\rm{lISW}}$ is poorly constrained because the lISW effect has negligible influence at $\ell \geq 30$; and the others are consistent with the $\Lambda$CDM model. Moreover, no noticeable improvement on the Hubble and $\sigma_8$ tensions is found within these one-parameter extended scenarios. ACT DR6 high-$\ell$ data strengthens the $\Lambda$CDM$+A_{\rm{L}}$ preference over the $\Lambda$CDM model, and reduces the $A_{\rm Pol}$ uncertainty by more than one order of magnitude, highlighting the importance of ground-based high-$\ell$ observations for future CMB analyses.
The considerable difference between early and late universe measurements of the Hubble constant, called the Hubble tension, poses a potential challenge to the standard $\Lambda$CDM cosmological model. We examine an interacting dark matter-dark energy model, $\Lambda_s$CDM, characterized by a gauge-invariant coupling $Q = \xi H\rho_{\mathrm{de}}$ and an effective pressure dynamically induced within the dark matter fluid. Using the CLASS Boltzmann code modified in this work, we analyze both the background and perturbation observables and compute an extensive Markov Chain Monte Carlo analysis with the latest cosmological datasets, including observational Hubble parameter data, Planck 2018 CMB compressed likelihood, BAO (from DESI DR2), Pantheon+ Type Ia supernovae, and redshift-space distortion measurements. The model predicts $H_0 = 71.8_{-0.3}^{+0.4}\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, reducing the tension with the SH0ES local measurement from about $5\sigma$ in $\Lambda$CDM to $1.2\sigma$ in $\Lambda_s$CDM. In contrast to the early dark energy model, the resolution emerges from a late-time modification of the expansion history induced by the energy transfer from dark matter to dark energy. Moreover, the model suppresses late-time structure growth, providing $\sigma_8 = 0.744 \pm 0.0185$, which lies below the $\Lambda$CDM value and moves in the direction preferred by weak lensing surveys. Since the interaction term is suppressed at high redshift, the pre-recombination sound horizon departs by less than $1\%$ from its $\Lambda$CDM value, suggesting that the alleviation of the tension dominantly originates from the late-time expansion rather than early-universe effects. We conclude that $\Lambda_s$CDM constitutes a phenomenologically viable interacting dark sector framework that addresses key cosmological tensions while remaining consistent with current precision data.
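At the background level, the stated coupling modifies the dark-sector continuity equations; written with the sign convention in which $\xi > 0$ transfers energy from dark matter to dark energy (the paper's convention may differ):

```latex
\dot{\rho}_{\rm dm} + 3H\rho_{\rm dm} = -Q\,, \qquad
\dot{\rho}_{\rm de} + 3H\left(1 + w_{\rm de}\right)\rho_{\rm de} = +Q\,, \qquad
Q = \xi H \rho_{\rm de}\,.
```

Because $Q \propto \rho_{\rm de}$, the interaction switches on only at late times when dark energy dominates, which is why the sound horizon is left nearly unchanged while the late-time expansion rate is raised.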