pith. machine review for the scientific record.

astro-ph.IM

Instrumentation and Methods for Astrophysics

Detector and telescope design, experiment proposals. Laboratory Astrophysics. Methods for data analysis, statistical methods. Software, database design

astro-ph.IM 2026-05-13 Recognition

Marginal estimator cuts noise errors in solar deconvolution

Marginal multi-object multi-frame blind deconvolution

By integrating out the objects, the method adds a log-determinant term that prevents noise from being fitted as high-order aberrations.

High-resolution ground-based solar imaging relies heavily on multi-object multi-frame blind deconvolution to correct for atmospheric turbulence. However, traditional joint maximum-likelihood estimation methods, in which the object and the atmospheric aberrations are estimated together, face several problems. In this paper, we introduce a marginal estimator for the multi-object multi-frame blind deconvolution problem. By employing a framework to marginalize over the observed objects, we develop a reconstruction method that offers several distinct advantages over joint estimation. First, the marginalization provides enhanced regularization that naturally accounts for object uncertainty, successfully preventing the reconstruction algorithm from erroneously assigning noise to high-order aberrations. Second, the marginal estimator yields better control over contrast, as it is much less sensitive to the hyperparameters dictating the power spectral density (PSD) of the object. This robustness allows these hyperparameters to be optimized, enabling a ``plug-and-play'' deployment that removes the need for manual tuning. Finally, we demonstrate that the proposed method is accessible and simple to implement, requiring only the addition of a log-determinant term to the traditional merit function. With minimal modifications required for existing blind deconvolution pipelines, the estimator has been fully integrated into the open-source torchmfbd package for use by the solar physics community.
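The log-determinant addition described above can be sketched per Fourier frequency: with Gaussian noise and a Gaussian object prior, marginalizing the object out of the multi-frame likelihood leaves a quadratic data term plus a log-determinant penalty that stops noise being absorbed into high-order aberrations. A minimal NumPy sketch, assuming per-frequency complex OTFs and data; the shapes and normalization are illustrative assumptions, not the torchmfbd implementation:

```python
import numpy as np

def marginal_merit(otfs, data, sigma2, psd_obj):
    """Per-frequency marginal MFBD merit (illustrative sketch).

    otfs, data: complex arrays of shape (n_frames, n_freq).
    sigma2: noise variance; psd_obj: object PSD per frequency.
    The object is integrated out analytically, leaving a quadratic
    data term plus the log-determinant regularizer."""
    # Precision of the Gaussian object posterior at each frequency
    A = np.sum(np.abs(otfs) ** 2, axis=0) / sigma2 + 1.0 / psd_obj
    b = np.sum(np.conj(otfs) * data, axis=0) / sigma2
    # Quadratic term with the object marginalized out
    quad = np.sum(np.abs(data) ** 2) / sigma2 - np.sum(np.abs(b) ** 2 / A)
    # Log-determinant term: penalizes fitting noise as aberrations
    logdet = np.sum(np.log(A * psd_obj))
    return quad.real + logdet
```

Minimizing this merit over the aberration parameters that generate the OTFs reproduces the "only add a log-determinant term" recipe the abstract describes.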
astro-ph.IM 2026-05-13 2 theorems

Ground VLF detects most solar flares within rise time

Real-time detection of solar flares from ground-based VLF data

Phase trend changes from ground stations give low-latency alerts and a satellite-independent backup while estimating X-ray levels.

A method for real-time solar flare detection and characterization using ground-based Very Low Frequency (VLF, 15-45 kHz) data is presented. The D-region, the lowest layer of the ionosphere, is monitored by VLF waves propagating in the Earth-ionosphere waveguide. The D-region electron density increases during sudden surges in X-ray radiation from solar flares, which subsequently enhances HF absorption. An incremental algorithm detects solar flares by seeking trend changes in VLF phase data; 82.7% of M and X solar flares are detected within one fourth of their rise time. In addition, several VLF transmitters are monitored simultaneously, and combining information from their phase variations leads to an estimate of the Sun's X-ray flux. Finally, propagation models such as LMP or LWPC are combined with the VLF measurements to compute D-region electron density profiles. This method and its implementation in a new Python package are a step towards building a more resilient system for flare detection and alerts. Its reliance on ground-based data alone ensures easy maintenance and provides a backup in case of satellite failure. It also provides alerts comparable to or faster than those obtained through satellite data, owing to shortened data latency.
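The incremental trend-change idea can be illustrated with a sliding-window slope test: fit the local phase slope, compare it against a quiet-time baseline, and flag significant departures. This is a hedged sketch of the general technique only, not the paper's actual algorithm; the window length, baseline choice, and threshold are arbitrary:

```python
import numpy as np

def detect_trend_changes(phase, window=30, thresh=3.0):
    """Flag sample indices where the local slope of a VLF phase series
    departs from the quiet baseline by more than `thresh` standard
    deviations. Illustrative sliding-window sketch; the first `window`
    slopes are assumed to be flare-free and used as the baseline."""
    t = np.arange(window, dtype=float)
    slopes = []
    for i in range(window, len(phase) + 1):
        seg = phase[i - window:i]
        slopes.append(np.polyfit(t, seg, 1)[0])   # local linear slope
    slopes = np.asarray(slopes)
    base_mu = slopes[:window].mean()
    base_sd = slopes[:window].std() + 1e-12       # avoid divide-by-zero
    alarms = np.where(np.abs(slopes - base_mu) > thresh * base_sd)[0] + window
    return alarms
```

On a flat phase record followed by a ramp, the first alarm fires as soon as a window contains ramp samples, which is what enables low-latency alerts.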
astro-ph.IM 2026-05-12 Recognition

Three-channel camera splits light into simultaneous color bands

Optical Design of OPTICAM-ARG: A Three-Channel High-Time-Resolution Camera for the Jorge Sahade Telescope

OPTICAM-ARG reaches 9.1 m effective focal length and 8.4 arcmin fields per channel by adding wedge angles to dichroics.

We present the optical design of OPTICAM-ARG, a multi-channel instrument for the simultaneous acquisition of images in three spectral bands at the Cassegrain focus of an f/8.5 telescope, covering the 0.35 to 1.00 um wavelength range. The converging beam delivered by the telescope is spectrally separated by two dichroics into three channels, blue, green, and red, each incorporating a dedicated three-lens focal reducer, an interchangeable SDSS filter stage, and an sCMOS detector. The focal reducers establish an effective focal length of approximately 9.1 m, a uniform plate scale of 22.6 arcsec/mm, and a field of view of 8.4 arcmin x 8.4 arcmin per channel, consistent with the typical seeing conditions at the site. Operation of the dichroics in a converging beam introduces off-axis aberrations, which are mitigated through wedge angles applied to their second surface and optimized as part of the global design. Optical performance is assessed through exact ray tracing using RMS spot radii and encircled energy metrics, with EE50 values further expressed in terms of an equivalent FWHM to enable direct comparison with atmospheric seeing and to evaluate sensitivity to manufacturing tolerances.
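The quoted plate scale and field of view follow directly from the effective focal length via the small-angle relation 206265 arcsec/rad. A quick consistency check; the ~22.2 mm detector side length used below is inferred from the quoted numbers, not stated in the abstract:

```python
def plate_scale_arcsec_per_mm(focal_length_m):
    """Plate scale: 206265 arcsec per radian divided by the effective
    focal length in mm."""
    return 206265.0 / (focal_length_m * 1000.0)

def field_of_view_arcmin(focal_length_m, detector_mm):
    """Field of view (arcmin) subtended by a detector of the given
    side length at the given effective focal length."""
    return plate_scale_arcsec_per_mm(focal_length_m) * detector_mm / 60.0
```

With f = 9.1 m this gives ~22.7 arcsec/mm, matching the abstract's 22.6 arcsec/mm to rounding, and an 8.4 arcmin field for a ~22.2 mm detector side.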
astro-ph.IM 2026-05-12 Recognition

Neural pipeline accelerates Baikal-GVD neutrino candidate selection

From raw data to neutrino candidates: a neural-network pipeline for Baikal-GVD

Transformer networks suppress noise and air showers, then select candidates orders of magnitude faster than standard reconstruction.

We present a neural-network-based data processing pipeline for Baikal-GVD, designed to improve event reconstruction quality and accelerate neutrino candidate selection. The pipeline comprises three stages: fast suppression of extensive air shower events, suppression of noise optical-module activations, and extraction of high-confidence neutrino candidates. All three networks employ a transformer architecture that exploits inter-hit correlations through the attention mechanism. Applied sequentially, the pipeline achieves orders-of-magnitude speedup over the standard reconstruction chain. Moreover, the noise-suppression neural network surpasses the accuracy of algorithmic noise-suppression methods and provides an estimate of the time residuals of the signal hits, which is crucial for the identification of track-like hits. We address the domain shift between Monte Carlo simulations and experimental data by incorporating a domain adaptation technique, demonstrating improved agreement between the two domains. The resulting framework enables near-real-time event classification, with direct applications to multi-messenger alert systems and diffuse neutrino flux measurements.
astro-ph.IM 2026-05-12 Recognition

Text leaves entropy floor in astrophysics method reconstruction

Quantifying the Reconstructability of Astrophysical Methods with Large Language Models and Information Theory: A Case Study in Spectral Reconstruction

LLM tests on TNO spectral pipelines show multiple divergent implementations remain consistent with full methods text.

Modern astrophysical studies rely heavily on complex data analysis pipelines; however, published descriptions often lack the detail required for computational reproducibility. In this work, we present an information-theoretic framework to quantify how effectively a method can be reconstructed from its written description. By treating algorithmic reconstruction as a probability distribution generated by Large Language Models (LLMs), we utilize Shannon entropy and Jensen-Shannon divergence to measure how strongly text constrains the hypothesis space of valid implementations. We demonstrate this approach through a case study of Trans-Neptunian Object (TNO) spectral reconstruction from sparse photometry. By prompting frontier LLMs with varying levels of manuscript text (Title, Abstract, and Methods), we find that while increasing text successfully clarifies the overall algorithmic structure, it fails to eliminate variance at the implementation level. This persistent variance establishes an "entropy floor," demonstrating that multiple divergent implementations remain consistent with explicit instructions. To evaluate practical reproducibility, we convert these reconstructed algorithms into executable pipelines. Our results reveal that, while LLMs easily recover core functional methodologies, they systematically fail to infer the tacit expert knowledge required for strict scientific calibration. This pilot study demonstrates that LLMs can be repurposed as a zero-shot diagnostic tool to audit methodological transparency, helping authors identify missing structural constraints and preserve scientific integrity in an era of automated research.
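The information-theoretic quantities this framework rests on are standard and easy to compute once LLM-generated implementations are binned into discrete categories (for example, by algorithmic choice). A small sketch, assuming already-normalized probability vectors over those categories:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution over
    implementation choices; the 'entropy floor' is this value's
    plateau as more manuscript text is supplied."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def js_divergence(p, q):
    """Jensen-Shannon divergence in bits between two implementation
    distributions (0 = identical, 1 = disjoint for 2-way splits)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

A persistent nonzero `shannon_entropy` after conditioning on the full Methods text is exactly the entropy floor the abstract reports.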
astro-ph.IM 2026-05-12 Recognition

Browser tool computes survey footprint overlaps client-side

Survey Footprint Explorer: A Browser-Based Interactive Tool for Visualizing and Cross-Matching Astronomical Survey Footprints

Thirteen major surveys from X-ray to near-infrared can be compared instantly with no server or installation required.

We present the Survey Footprint Explorer (v2.5.0), a browser-based interactive tool for visualising and comparing the sky footprints of major astronomical imaging surveys. The tool is implemented entirely in client-side JavaScript and requires no server infrastructure, making it immediately accessible from any modern web browser. Thirteen survey footprints are currently included: Euclid DR1, LSST Wide-Fast-Deep, the Nancy Grace Roman HLWAS and HLTDS (full and deep tiers), DESI Legacy Imaging Survey DR9, the Dark Energy Survey (DES), the Subaru Hyper Suprime-Cam survey (HSC), the Kilo-Degree Survey (KiDS), the Ultraviolet Near-Infrared Optical Northern Survey (UNIONS), the eROSITA All-Sky Survey (eRASS1), and the Atacama Cosmology Telescope Legacy (ACT) survey spanning wavelengths from X-ray to near-infrared and covering footprints from 7.7 deg$^{2}$ to 21,524.4 deg$^{2}$. Survey footprints are encoded as Multi-Order Coverage (MOC) maps and rendered via two complementary views: an interactive globe powered by Aladin Lite v2, and a full-sky equirectangular projection. All MOC intersection calculations, including multi-survey overlap area computation and per-source membership testing, are performed client-side. Users may upload source catalogues in CSV or TSV format and download an augmented version with boolean survey membership columns appended. The link to access the tool is provided at the end of the Summary section.
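MOC maps are hierarchical sets of HEALPix cells, so footprint intersection, overlap area, and membership testing reduce to set operations, which is what makes an all-client-side implementation feasible. A simplified single-order sketch (real MOCs are multi-order; the cell indices here are arbitrary placeholders):

```python
import math

def cell_area_deg2(order):
    """Area of one HEALPix cell at the given order, in square degrees:
    the full sky (4*pi sr ~ 41253 deg^2) split into 12 * 4**order cells."""
    full_sky = 4 * math.pi * (180.0 / math.pi) ** 2
    return full_sky / (12 * 4 ** order)

def overlap_area(moc_a, moc_b, order):
    """Overlap area of two footprints given as sets of HEALPix cell
    indices at a common order (a simplified stand-in for MOC maps)."""
    return len(moc_a & moc_b) * cell_area_deg2(order)

def contains(moc, cell):
    """Per-source membership test: is the source's cell in the footprint?"""
    return cell in moc
```

Augmenting an uploaded catalogue then amounts to mapping each source to its cell and appending `contains(...)` booleans per survey.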
astro-ph.IM 2026-05-12 Recognition

Photonic chip senses solar wavefronts without images

Photonic integrated circuits for astronomy: A formal description of an integrated photonics-based wavefront sensor (IP-WFS)

Mathematical model and simulations confirm direct phase-difference measurements work on extended low-contrast sources.

Context. Solar wavefront sensing has been a challenge for astrophysical instrumentalists, because the low contrast between the Sun and the sky background, compared to night-time observations, limits the performance of adaptive optics systems. Aims. Wavefront correction in solar physics requires the analysis of extended images, whereas at night the displacement of a point-like object is analysed. This technique limits the spatial resolution, and therefore the accuracy of the wavefront reconstruction. Methods. To solve this problem, a new method of direct wavefront sensing without the need for image formation was explored for this work. A novel and promising technology called integrated photonics was used to accomplish this task. It allows the direct measurement of phase differences across the wavefront without the need to form images, using the principle of interferometry. This technology offers a low-consumption, miniaturised solution to astrophysical problems. Results. For this work, a mathematical model was derived to characterise the behaviour of the proposed wavefront sensor. The proposed system was verified and simulated using a Python-based adaptive optics simulator. These simulations demonstrate the physical behaviour of the proposed wavefront sensor and highlight the factors that must be taken into account for its correct functioning.
astro-ph.IM 2026-05-12 Recognition

AI agent autonomously links gravitational-wave and EM events

An agentic framework for gravitational-wave counterpart association in the multi-messenger era

GW-Eyes combines large language models with domain tools to handle the coming flood of multi-messenger detections.

With the detection of gravitational waves (GWs), multi-messenger astronomy has opened a new window for advancing our understanding of astrophysics, dense matter, gravitation, and cosmology. The GW sources detected to date are from mergers of compact object binaries, which possess the potential to generate detectable electromagnetic (EM) counterparts. Searching for associations between GW signals and their EM counterparts is an essential step toward enabling subsequent multi-messenger studies. In the era of next-generation GW and EM detectors, the rapid increase in the number of events brings not only unprecedented scientific opportunities, but also substantial challenges to the existing data analysis paradigm. To help address these challenges, we develop GW-Eyes, an agentic framework powered by large language models (LLMs). For the first time, GW-Eyes integrates domain-specific tools and autonomously performs counterpart association tasks between GW and candidate EM events. It supports natural language interaction to assist human experts with auxiliary tasks such as catalog management, skymap visualization, and rapid verification. Our framework leverages the complex decision-making capabilities of LLMs and their traceable reasoning processes, offering a new perspective on multi-messenger astronomy.
astro-ph.IM 2026-05-12 Recognition

Pipeline extracts Swift lightcurves with absorption-corrected fluxes

SAPLE: Swift Analysis Pipeline for Lightcurve Extraction

It automates UVOT and XRT processing to deliver magnitudes, corrected fluxes, and photon indices for point sources.

We present the Swift Analysis Pipeline for Lightcurve Extraction (SAPLE), a semi-automated pipeline to extract the Swift-UVOT and Swift-XRT data products and spectral information (magnitudes, photon indices, and fluxes) for a set of observations of any point source of interest. This pipeline is not meant to substitute, but to complement, the tools the Swift team has already set up. Specifically, SAPLE provides a Swift-UVOT semi-automated pipeline that also returns the absorption-corrected specific fluxes for any observation and filter of interest, a tool which to our knowledge is not yet publicly available to the community. Moreover, for Swift-XRT, SAPLE enables the user to extract a lightcurve of both flux and photon index (with associated uncertainties), assuming a redshifted power-law spectrum. The main codes are available through a GitHub repository (L. Marcotulli & N. Torres-Alb\`a 2026), and this paper summarizes the main steps of the analysis.
astro-ph.IM 2026-05-12 Recognition

SiPM-ASIC pair resolves single photoelectrons in 1.7 ns

High-speed single-photoelectron detection for Cherenkov astronomy

The integrated hexagonal sensor and low-power CMOS ASIC show linear response to 130 photoelectrons while preserving Cherenkov pulse timing.

Silicon photomultipliers are increasingly replacing photomultiplier tubes in Cherenkov telescope cameras, but achieving single-photoelectron resolution with nanosecond timing in a low-noise, scalable detector system remains challenging. We present a co-designed SiPM sensor and front-end application specific integrated circuit (ASIC) that meets these requirements. The custom hexagonal sensor, developed with Hamamatsu Photonics, incorporates an integrated optical filter and fourfold pixel segmentation. The readout is performed by a second prototype of the FANSIC ASIC, optimized for this application and fabricated in 65~nm standard CMOS technology. It provides eight channels with on-chip analog summing of sub-channels on a $3.5\times 3.5~\mathrm{mm}^2$ die, while consuming only 24~mW per channel. We demonstrate clear single-photoelectron peak separation with a gain of $2.7 \times 10^{-12}~\mathrm{V \cdot s}$, and an impulse response below 4~ns full width at half maximum with a 1.7~ns rise time, preserving the nanosecond-scale structure of Cherenkov pulses. The system responds linearly from 1 to 130 photoelectrons, and 55 distinct photoelectron peaks are resolved by varying the source intensity. These results demonstrate that the integrated sensor-electronics architecture delivers the speed, resolution, and dynamic range required for imaging atmospheric Cherenkov telescopes, and provides a scalable path toward large-area camera modules.
astro-ph.IM 2026-05-12 Recognition

MXT optic calibration in orbit matches ground tests

Design and in-orbit calibration of the MXT optics

Celestial source data confirm the 1-degree optimized field of view and constant PSF predicted by PANTER measurements.

The Microchannel X-ray Telescope (MXT) is one of four instruments on the Space-based multi-band astronomical Variable Objects Monitor (SVOM) satellite mission, launched on 22 June 2024. The MXT is a narrow-field-optimised, lobster-eye X-ray focusing telescope, consisting of an array of 25 square MPOs, with a focal length of 1.14 m and working in the energy band 0.2 - 10 keV. The design of the MXT optic (MOP) is optimised to give a 1 degree FoV to match the detector size, but the optic has the unique characteristics of a lobster-eye design, with a wide FoV of 6 degrees in diameter and a PSF that is constant over the entire FoV. The MPOs on the Flight Module (FM) MOP have a pore size of 40 um, giving optimum thicknesses across the aperture of 2.4 mm in the centre and 1.2 mm at the edges. Using specific target sources, the in-orbit calibration of the optic is described here and compared to the extensive on-ground calibration carried out at the PANTER test facility, MPE, Germany. The design and limitations of the electron diverter, situated directly behind the optic, are also discussed.
astro-ph.IM 2026-05-12 2 theorems

Higher-order modes cut thermal correction power to 24-33% of Gaussian needs

Thermal Deformation Reduction in High-Power Interferometry with Higher-Order Laser Modes

LG2,2 and HG3,3 beams create more uniform test-mass heating, lowering the actuator power required at megawatt levels in future detectors.

Test-mass thermal noise is a limiting noise source for current and next-generation ground-based gravitational-wave observatories. Uniform-intensity higher-order laser beams, including Laguerre-Gaussian (LG) and Hermite-Gaussian (HG) modes, have been proposed as alternatives to the fundamental Gaussian beam due to their thermal-noise advantages. As interferometer power increases toward the megawatt regime, thermal aberrations from absorption in the test-mass coatings become increasingly significant. In this work, we quantify the robustness of higher-order modes against absorption-induced thermal deformation. We show that, under identical operating conditions, higher-order modes produce substantially more uniform thermal distortions than the fundamental mode, requiring significantly less thermal compensation power. The optimal curvature correction is reduced to 33% for the LG$_{2,2}$ mode and 24% for the HG$_{3,3}$ mode relative to the fundamental mode. We further show that the residual thermal deformation of higher-order modes results in lower optical loss, larger cavity power buildup, and improved modal purity in an aLIGO-like cavity. In addition, astigmatism compensation further enhances the intracavity purity of HG modes under self-heating-induced deformation. These results demonstrate that higher-order modes not only mitigate thermal noise but also intrinsically reduce beam self-heating effects, making them promising candidates for future high-power gravitational-wave interferometers.
astro-ph.IM 2026-05-12 Recognition

SETI searches for broadband leakage reach 100 parsecs

Isolating Broadband Radio Technosignatures (BRaTs): A Framework for Detecting Planetary-Scale Leakage

Wide-field arrays and VLBI follow-up isolate planetary-scale radio technosignatures that avoid Doppler issues.

The search for extraterrestrial intelligence (SETI) has traditionally focused on the detection of narrowband electromagnetic beacons. However, terrestrial technology is increasingly evolving toward distributed, low-power, wideband digital infrastructure. The strict adherence to narrowband filtering that characterises most SETI surveys, therefore, risks discarding the aggregate leakage signatures of advanced civilisations by systematically misclassifying them as unstructured noise. We investigate the feasibility of detecting such planetary-scale broadband radio technosignatures (BRaTs) using a hierarchical observational framework. In this tiered approach, wide-field radio surveys conducted by next-generation arrays (such as the SKA and its precursors) perform the initial deep-field observations, with targeted Very Long Baseline Interferometry (VLBI) providing the definitive, high-resolution follow-up. Because broadband continuum emission is largely insensitive to Doppler drift, long-duration "SETI Deep Fields" are observationally viable, extending the accessible detection volume for Kardashev Type I leakage to 100 pc. To distinguish these signals from other astrophysical confounders, a multi-parameter diagnostic framework is proposed. Candidate technosignatures are identified through a convergence of high brightness temperatures, negligible circular polarisation, spectral non-uniformity, interstellar scintillation, and sub-milliarcsecond astrometric co-motion with nearby Galactic stars/exoplanets.
astro-ph.IM 2026-05-12 Recognition

Practical guide makes zero-order-hold trajectory transcriptions robust across dynamical models

A practical guide to implementing zero-order-hold interplanetary trajectory legs

Forward-backward shooting and softmax time grids enable reliable performance across 28 problems in four dynamical models.

We study the practical implementation of zero-order-hold (ZOH) transcriptions for spacecraft trajectory optimisation, identifying a set of design principles that render them robust across a broad class of dynamical settings without problem-specific tuning. The contributions are fourfold: (i) a thorough study of the forward--backward shooting construction, denoted $\mathrm{ZOH}_\alpha$; (ii) a redundant four-dimensional throttle parameterization that eliminates the singularity of the control influence matrix along ballistic arcs; (iii) a softmax time-grid encoding that avoids ordering constraints on segment durations while preserving full differentiability; and (iv) the TOPS benchmark (Trajectory Optimisation Problems in Space), a suite of 28 problems spanning four dynamical models, two-body Cartesian, modified equinoctial elements, circular restricted three-body, and solar sailing, designed to be extended over time.
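The softmax time-grid encoding (contribution iii) maps unconstrained parameters to strictly positive segment durations that sum to the total flight time, so no ordering constraints on segment boundaries are needed and the map stays fully differentiable. A minimal sketch of that idea, not the paper's exact parameterization:

```python
import numpy as np

def softmax_time_grid(z, total_time):
    """Map unconstrained parameters z to positive segment durations
    summing to total_time; differentiable, with no ordering constraints."""
    w = np.exp(z - np.max(z))          # shift for numerical stability
    return total_time * w / np.sum(w)

def grid_nodes(durations):
    """Cumulative node times of the zero-order-hold grid, starting at 0."""
    return np.concatenate([[0.0], np.cumsum(durations)])
```

Because any real vector `z` yields a valid, monotone grid, an optimizer can move freely in `z`-space while the transcription's node times remain feasible by construction.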
astro-ph.IM 2026-05-12 Recognition

6.5 m telescope to record 100 million galaxy redshifts

From Large Telescopes to the MUltiplexed Survey Telescope (MUST)

MUST uses 20,000 simultaneous fiber spectra over 5 square degrees for the largest 3D universe map in the 2030s.

Recent advances in astronomical observations have ushered in an era of remarkable discoveries. We now probe the Universe through multi-messenger signals, image the sky with unprecedented depth and resolution, and investigate individual sources using powerful large-aperture telescopes. Yet, a critical gap persists: the lack of wide-field, highly multiplexed spectroscopic capabilities needed to fully exploit the wealth of imaging data from current and upcoming surveys. In this review, we trace the historical development of large optical telescopes and spectroscopic surveys, assess the capabilities of ongoing and near-future facilities, and motivate the need for next-generation Stage-V spectroscopic experiments. As a representative example, we present the MUltiplexed Survey Telescope (MUST), the first Stage-V spectroscopic facility currently under construction. MUST is a 6.5-meter telescope designed to obtain optical spectra for over 20,000 targets simultaneously within a $\sim$5 deg$^2$ field, using a modular focal plane populated with 6.2-mm pitch fiber-positioning robots. Over an 8-year survey in the 2030s, MUST aims to build the most comprehensive 3D spectroscopic map of the Universe to date, measuring redshifts for over 100 million galaxies and quasars and opening new windows into cosmology, Galactic structure, and time-domain astrophysics.
astro-ph.IM 2026-05-12 Recognition

RAYTHEIA ray tracer models PDRs at 512^3 resolution

RAYTHEIA: A high-performance ray-tracing algorithm for three-dimensional direction-dependent equations in astronomical simulations

The method combines dual grids and efficient traversal to deliver near-ideal parallel speed-up for three-dimensional chemistry simulations.

We present RAYTHEIA, a high-performance reverse ray-tracing algorithm designed to efficiently solve three-dimensional direction-dependent equations in astronomical simulations. The algorithm uses a dual-grid framework in which the native simulation mesh -- serving as the source grid for ray emission -- and an adaptive mesh refinement (AMR) Cartesian contribution grid are constructed for efficient ray-walking and contribution accumulation. The core of the algorithm integrates a leaf-only linear-octree data structure to reduce memory overhead, the digital differential analyzer (DDA) traversal method to efficiently determine the ray-walking path, Morton code indexing for fast leaf-cell lookup during traversal, and the slab method to analytically compute the path length. Furthermore, RAYTHEIA employs a hybrid (MPI/OpenMP) distributed parallel framework with a chunk-to-chunk communication strategy, achieving a near-ideal linear speed-up. We integrate RAYTHEIA with the 3D-PDR code to solve the complex chemistry and radiation transfer in photodissociation regions (PDRs). This allowed the modelling of three-dimensional PDR chemistry in a turbulent, star-forming cloud at an unprecedented resolution of $512^3$ grid cells. The algorithm demonstrates accuracy and convergence even at low angular resolutions. We further showcase the capabilities of RAYTHEIA by producing high-resolution synthetic emission maps of key diagnostic lines of a star-forming region, capturing physical effects such as [O I] $63\mu$m self-absorption, measuring the [C I]-bright but CO-dark molecular gas, and deriving a CO-to-H$_2$ conversion factor in agreement with observations.
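The DDA traversal at the core of such ray walkers advances the ray cell by cell, always stepping along the axis whose next cell boundary is closest (as in the classic Amanatides-Woo formulation). A uniform-grid sketch for illustration; the paper's version traverses a leaf-only octree with Morton-code lookup, which this deliberately omits:

```python
import math

def dda_traverse(origin, direction, n_cells, cell_size=1.0):
    """Return the list of cells a ray visits in a uniform 3D grid of
    n_cells^3 cells, using DDA stepping. Simplified sketch only."""
    pos = [int(o // cell_size) for o in origin]          # starting cell
    step = [1 if d > 0 else -1 for d in direction]       # step per axis
    t_max, t_delta = [], []
    for o, d, i in zip(origin, direction, pos):
        if d == 0:
            t_max.append(math.inf)                       # never step here
            t_delta.append(math.inf)
        else:
            nxt = (i + (d > 0)) * cell_size              # next boundary
            t_max.append((nxt - o) / d)                  # t of that crossing
            t_delta.append(cell_size / abs(d))           # t per full cell
    cells = []
    while all(0 <= p < n_cells for p in pos):
        cells.append(tuple(pos))
        axis = t_max.index(min(t_max))                   # closest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return cells
```

The per-step work is a three-way minimum and one addition, which is why DDA-style walking scales so well when combined with fast cell lookup.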
astro-ph.IM 2026-05-11 2 theorems

Everyday cameras measure eclipse shadow band patterns

Image Processing Framework for Eclipse Shadow Band Analysis

Framework applied to two eclipses finds statistically significant activity matching scintillation theory plus simultaneous orthogonal modes.

Eclipse shadow bands are transient intensity patterns that can appear on the ground near solar eclipse totality. This study presents a reusable image-processing framework for analyzing shadow-band video recordings collected with consumer-grade cameras. The framework quantifies band orientation, band prominence, and band power spectral density from video recordings. Applied to two eclipse datasets, the method detected statistically significant shadow-band activity during eclipse windows that align with the scintillation theory for shadow bands. The results also highlight simultaneous superimposed eclipse shadow band modes with orthogonal orientations. This demonstrates that consumer grade cameras can support quantitative analysis of shadow bands and may support future observational and atmospheric studies.
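The band power spectral density such a framework reports can be computed from a per-frame mean-intensity trace with a windowed periodogram. A sketch assuming a uniformly sampled trace; the paper's exact estimator and normalization are not specified here:

```python
import numpy as np

def band_psd(intensity, fps):
    """One-sided power spectral density of a mean-subtracted intensity
    trace sampled at `fps` frames per second; spectral peaks reveal the
    dominant shadow-band temporal frequencies. Minimal periodogram
    sketch with a Hann window."""
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                       # remove the DC level
    n = len(x)
    win = np.hanning(n)
    spec = np.fft.rfft(x * win)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    psd = np.abs(spec) ** 2 / (fps * np.sum(win ** 2))
    return freqs, psd
```

Comparing the in-window PSD against a pre-totality baseline is one simple way to call band activity "statistically significant".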
astro-ph.IM 2026-05-11 2 theorems

DRAO RFI monitor tracks 2 GHz bandwidth for transients

Wideband RFI Monitor Requirements, Design, and Commissioning at DRAO

After calibration and stability upgrades it serves as both real-time detector and long-term environment tool with a new gain drift method.

In this paper, we introduce the radio frequency interference monitor deployed at the Dominion Radio Astrophysical Observatory. It provides 2 GHz of instantaneous bandwidth, supporting channel bandwidths as fine as ~100 Hz for 1 s integrations, or integration times as low as ~50 ms for the standard 3.33 kHz channel bandwidth. After operating as a prototype instrument for several years, the monitor was commissioned to improve the calibration method, analog section temperature, and gain stability. It now operates both as a transient detector and as a long-term radio environment characterization tool. We introduce novel applications for the monitor and derive a new method for calculating the effect of gain drift on integrated data.
astro-ph.IM 2026-05-11 1 theorem

CT beats Dedner's cleaning for localized fields in MHD

Systematic Comparison between Constrained Transport and Mixed Divergence Cleaning Methods for Astrophysical Magnetohydrodynamic Simulations

Divergence cleaning artifacts appear when fields localize or timesteps shift; constrained transport avoids them.

Magnetohydrodynamic (MHD) simulations are indispensable research infrastructure in astrophysics today. In order to satisfy the solenoidal constraint of the MHD equations on discretized grids, modern simulation codes often employ either constrained transport (CT) with a staggered grid or divergence cleaning using an additional variable. We compare CT and Dedner's mixed divergence cleaning schemes systematically, and find that the divergence cleaning scheme can produce substantial artifacts in certain situations. Through numerical experiments including both idealized tests and practical applications, we show that the original implementation of Dedner's scheme becomes inaccurate when magnetic fields are strongly localized or when the timestep suddenly changes. We find that some previous results, such as the extremely rapid growth of magnetic fields during star formation in the early Universe, may be affected by the spurious behavior of the divergence cleaning scheme. We propose a few modifications to improve the robustness of the divergence cleaning method. Nevertheless, we find that the CT scheme is more accurate and reliable in many situations.
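The solenoidal constraint at issue is easy to state discretely: on a staggered (face-centered) grid, constrained transport keeps the sum of face-flux differences in every cell at machine zero, whereas divergence cleaning only damps and advects the error. A 2D sketch of the discrete divergence together with a divergence-free construction from a vector potential:

```python
import numpy as np

def divergence_staggered(bx, by, dx=1.0, dy=1.0):
    """Discrete divergence per cell for 2D face-centered fields:
    bx lives on x-faces with shape (nx+1, ny), by on y-faces with
    shape (nx, ny+1). CT updates keep this at machine zero."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

def field_from_potential(A, dx=1.0, dy=1.0):
    """Face-centered B = curl(A_z) from a corner-centered vector
    potential A_z of shape (nx+1, ny+1); divergence-free by
    construction, mirroring how CT initializes and evolves B."""
    bx = (A[:, 1:] - A[:, :-1]) / dy
    by = -(A[1:, :] - A[:-1, :]) / dx
    return bx, by
```

The identical finite-difference telescoping that makes this exact is what a cell-centered cleaning scheme lacks, which is why its residual divergence can grow when fields localize or the timestep jumps.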
astro-ph.IM 2026-05-08 Recognition

Automated pipeline coadds HST spectra to raise signal and coverage

The Hubble Advanced Spectral Product (HASP) Program

HASP filters and stacks COS and STIS observations, regenerating products as calibrations improve to deliver higher-quality archival spectra.

The Hubble Advanced Spectral Products (HASP) program is designed to robustly coadd Cosmic Origins Spectrograph (COS) and Space Telescope Imaging Spectrograph (STIS) spectra within the Mikulski Archive for Space Telescopes (MAST) in an automated fashion such that coadds are available for new data or archival data with updated calibrations. For each target within a visit or program, HASP employs a meticulous multi-stage filtering process to ensure data quality and creates coadded products for all central wavelengths (CENWAVEs) within specific gratings, as well as combined products using different gratings and instruments. The project also emphasizes making the code accessible to the user community for custom coaddition. As calibrations improve and new data are added to the archive, HASP products are re-created automatically so that they represent the best reduction of a given visit or program. Automated coadditions like those achieved by HASP can significantly enhance the combination of different CENWAVES, increase signal-to-noise ratios, and increase wavelength coverage. These properties make HASP a vital resource for astronomers using archival spectroscopic data from HST.
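The core coaddition step can be illustrated as an inverse-variance weighted combination on a common wavelength grid, which is how stacking raises the signal-to-noise ratio. A generic sketch assuming well-behaved errors; it ignores HASP's multi-stage quality filtering and flagging, which the abstract emphasizes:

```python
import numpy as np

def coadd_spectra(wave_grid, spectra):
    """Inverse-variance weighted coadd of spectra interpolated onto a
    common wavelength grid. Each element of `spectra` is a
    (wave, flux, err) triple. Generic sketch, not the HASP algorithm."""
    num = np.zeros_like(wave_grid, dtype=float)
    den = np.zeros_like(wave_grid, dtype=float)
    for wave, flux, err in spectra:
        f = np.interp(wave_grid, wave, flux)
        e = np.interp(wave_grid, wave, err)
        w = 1.0 / e ** 2                 # inverse-variance weight
        num += w * f
        den += w
    return num / den, 1.0 / np.sqrt(den)
```

For N equal-quality exposures the returned error shrinks as 1/sqrt(N), the familiar signal-to-noise gain from coaddition.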
astro-ph.IM 2026-05-08 2 theorems

Photo-z AI methods reach limit set by training data

Machine Learning Techniques for Astrophysics and Cosmology: Photometric Redshifts

Review concludes discriminative approaches have converged; further gains require better spectroscopic samples or generative Bayesian galaxy-data modeling.

The cosmological redshift of a galaxy's light is inferable from its observable properties in images. Because imaging is much easier to acquire than the spectroscopic observations that would allow the identification of distinct line features, this motivates the technique of photometric redshift estimation (photo-$z$). Photo-$z$ has been an early and sustained driver for the utilization of artificial intelligence (AI) in astrophysics, and conversely AI methods underlie most of the recent advances in photo-$z$. Here we review the diversity of AI methods applied over the years to the photo-$z$ problem in a discriminative way, that is, to regress redshift from photometric observables. We argue that, beyond optimization for specific applications, this approach has effectively converged: it is limited not by the AI methodology but by the limited size, substantial systematic uncertainties, and selection effects of spectroscopic training samples. Progress requires either an unobtainable quantity and quality of training data or a more principled approach to using it. We thus outline ongoing research on integrating AI into Bayesian modeling of galaxy data. This comes in the form of generative models that represent the distribution of intrinsic properties and the outcomes of telescope observations of the galaxy population.
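The "discriminative" approach the review describes, regressing redshift directly from photometry using a labeled training set, can be caricatured with a toy k-nearest-neighbour regressor on synthetic colours (purely illustrative; the colour-redshift relation below is invented):

```python
import numpy as np

# Toy discriminative photo-z: regress redshift from photometric colours
# with k-nearest neighbours. Entirely synthetic -- a sketch of the
# approach, not any survey's pipeline.
rng = np.random.default_rng(2)

def mock_colours(z):
    """Invented monotonic colour-redshift relation plus photometric noise."""
    c1 = 0.8 * z + rng.normal(0.0, 0.05, z.size)
    c2 = 0.3 * z**2 + rng.normal(0.0, 0.05, z.size)
    return np.column_stack([c1, c2])

z_train = rng.uniform(0.0, 2.0, 2000)      # "spectroscopic" training sample
X_train = mock_colours(z_train)
z_test = rng.uniform(0.0, 2.0, 500)
X_test = mock_colours(z_test)

def knn_photoz(X, k=15):
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    idx = np.argpartition(d2, k, axis=1)[:, :k]   # k nearest training points
    return z_train[idx].mean(axis=1)

z_hat = knn_photoz(X_test)
scatter = np.std((z_hat - z_test) / (1.0 + z_test))
# The residual scatter is set by the photometric noise and the training
# sample, not by the regressor -- echoing the review's central point.
```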
astro-ph.IM 2026-05-08 Recognition

Pipeline upgrades sharpen FRB positions and polarization data

A PINK update: Improvements to the CELEBI fast radio burst data reduction and analysis pipeline

Astrometry, gating, and calibration changes raise accuracy and speed for the coming higher detection rate from the CRACO upgrade.

Fast radio bursts (FRBs) that are well localised ($<$1") within their host galaxy are tools for studying cosmology and the intergalactic medium. Furthermore, high-time-resolution datasets of their polarisation properties enable testing of the numerous models of their potential progenitors. To that end, the CELEBI (CRAFT Effortless Localisation and Enhanced Burst Inspection) pipeline was conceived to enable data reduction from raw antenna voltages: detecting fast radio transient events, localising them to sub-arcsecond precision, and producing polarimetric data at time resolutions as fine as 3 ns. Here we present a slew of updates to the CELEBI pipeline. Improvements to the astrometry correction for FRB localisations have aided our ability to determine in which part of a galaxy nearby FRBs occurred, which itself has implications for the progenitor. We have also implemented time and frequency gating on detected fast transients to boost signal-to-noise, particularly useful for high-dispersion-measure or faint fast radio transients. We give examples of our improvements to the localisation, including for the currently 'hostless' FRB 20251019A. The polarisation calibration process has been overhauled, resulting in much more accurate measurements of derived polarisation fractions and rotation measures. Furthermore, we have incorporated tools for structure-maximisation of the dispersion measure of fast radio transients, added a software container that enables installation of CELEBI on other machines, and improved the pipeline's efficiency. Together these updates (named 'Polarisation and astrometry Improvements for New Knowledge', or PINK) greatly improve our ability to keep up with the expected detection rate from the CRAFT COherent (CRACO) upgrade to the real-time fast transient detection system of the Australian SKA Pathfinder.
astro-ph.IM 2026-05-08

Shorter baselines suffice for LIFE exoplanet mission

A preliminary exploration of the effects of baseline length for the LIFE space mission

Simulations show 25-80m range or discrete baselines lose under 10% in habitable planet detection performance.

By aiming to find and characterise dozens of habitable exoplanets through the technique of nulling interferometry, the LIFE space mission will produce transformational science. One of the key parameters for such an interferometric mission is the nulling baseline length - the distance between nulled apertures - which past studies have assumed to be 10-100m. Advances in planet occurrence statistics and simulation tools now allow us to revisit this key assumption in significantly more detail, particularly with the intention of reducing the range of baselines considered due to mission implementation concerns. We utilise the LIFEsim mission simulator along with revised mathematical tools to identify whether the range of baselines could be reduced without significantly affecting planet yield and fringe-tracking performance. Along the way, we also present a new astrophysically motivated technique for choosing which baselines are optimal for a given science target. We find that LIFE could indeed utilise a considerably shorter range of baselines, such as 25-80m, or even discrete baselines, without much (<10%) loss of performance. Nevertheless, careful trade-offs between performance and implementation simplification must be made, especially considering any spectral weighting that may be required by the scientific goals and the potential loss of target-specific baseline optimisation.
astro-ph.IM 2026-05-08

Follow-up selection need not be modeled for population inference

What You Don't Know Won't Hurt You: Self-Consistent Hierarchical Inference with Unknown Follow-up Selection Strategies

Hierarchical Bayesian methods recover the intrinsic astrophysical population accurately even when follow-up decisions correlate with unknown latent parameters.

Many astronomical surveys prompt follow-up observations, but the decision process through which candidates are selected for follow-up can be difficult to model. This poses a challenge when inferring properties of the intrinsic population of astrophysical sources, rather than those of the set of objects detected by the survey and often-incomplete follow-up observations. We alleviate this problem by demonstrating that explicit modeling of the follow-up selection process is not required for self-consistent inference of the intrinsic population. Using the framework of hierarchical Bayesian inference, we show that the intrinsic population can be accurately inferred even when the decision to follow up candidates strongly correlates with latent parameters of interest. We provide several worked examples, showing that the precision of posterior constraints can depend on the follow-up process and that one may have to model a population of contaminants if the initial selection is imperfect. Our result could dramatically simplify population inference that incorporates uncoordinated follow-up from multiple observers triggered by the deluge of candidates from surveys like LSST, Gaia, and next-generation gravitational-wave interferometers.
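The qualitative point can be illustrated with a deliberately crude toy (not the paper's hierarchical formalism): inference that keeps all detected events recovers the population mean without ever modelling the follow-up rule, while treating the followed-up subset as representative is biased.

```python
import numpy as np

# Crude caricature: events are drawn from a population with mean mu_true;
# follow-up is triggered only for candidates whose observed value exceeds
# a cut we pretend not to know.
rng = np.random.default_rng(3)
mu_true = 1.0
x = rng.normal(mu_true, 1.0, 200_000)   # observed values of all detections
followed = x > 1.5                      # selection rule, unknown to the analyst

mu_all = x.mean()               # uses every detection: ~unbiased
mu_follow = x[followed].mean()  # follow-up subset only: biased high (~2.1)
```

Because the follow-up decision depends only on observed data, it carries no extra information about `mu_true`; the bias appears only if the non-followed detections are thrown away.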
astro-ph.IM 2026-05-08

MTM recovers Cn2 profiles and matches DIMM seeing

Atmospheric turbulence profiling with the Multistar Turbulence Monitor

Simulations with the HV model and three nights of Daocheng data confirm the inversion works under realistic noise

Accurate characterization of atmospheric optical turbulence is essential for evaluating astronomical sites and optimizing adaptive optics systems. The Multistar Turbulence Monitor (MTM) infers the vertical distribution of the refractive-index structure constant Cn2(z) from differential image motion measured between multiple stellar pairs in short-exposure frames. We present a comprehensive investigation of the MTM method, combining theoretical analysis, instrument-performance assessment, numerical simulations, and on-sky observations obtained at the Daocheng Astronomical Site. Simulations based on a standard HV turbulence model demonstrate that the inversion pipeline robustly recovers both the integrated seeing and the vertical turbulence profile under realistic centroiding noise and varying pixel scales. The Markov Chain Monte Carlo (MCMC) inversion achieves stable results with thirteen discrete height nodes and provides reliable uncertainties. Three nights of MTM measurements at the Daocheng Astronomical Site show that MTM-derived seeing closely tracks simultaneous Differential Image Motion Monitor (DIMM) results, accurately reproducing both short-term fluctuations and nightly averages. These results confirm that MTM provides a simple, portable, and versatile solution for atmospheric turbulence profiling and routine seeing monitoring.
astro-ph.IM 2026-05-08

Baikal-GVD reconstructs neutrino directions to 0.2° from muon tracks

Muon Track Reconstruction Procedures at the Baikal-GVD Neutrino Telescope

Track analysis methods applied to 2019-2021 data enable the Northern Hemisphere's largest neutrino detector to locate cosmic sources.

The Baikal-GVD neutrino telescope is the largest neutrino detector of its kind in the Northern Hemisphere. Muons produced in neutrino interactions in the vicinity of the detector leave track-like responses that allow the neutrino arrival direction to be reconstructed with a precision of up to 0.2 degrees. The Baikal-GVD collaboration has developed a variety of methods for track-like event analysis. Methods for reconstructing the direction and energy of track-like events and for selecting neutrino candidate events are discussed in this report. Preliminary results from applying the analysis pipeline to the 2019-2021 data-taking seasons are shown.
astro-ph.IM 2026-05-08

Accuracy and honesty diverge in LLM astronomical classifications

AstroAlertBench: Evaluating the Accuracy, Reasoning, and Honesty of Multimodal LLMs in Astronomical Classification

Benchmark on 1,500 real alerts finds top models often cannot reliably judge their own reasoning

Modern astronomical observatories generate a massive volume of multimodal data, creating a critical bottleneck for expert human review. While multimodal large language models (LLMs) have shown promise in interpreting complex visual and textual inputs, their ability to perform specialized scientific classification while providing interpretable reasoning remains understudied. We introduce AstroAlertBench, a comprehensive multimodal benchmark designed to evaluate LLM performance in astronomical event review along a three-stage logical chain: metadata grounding, scientific reasoning, and hierarchical classification over five categories. We use a pilot sample of 1,500 real-world alerts from the Zwicky Transient Facility (ZTF), a wide-field survey that scans the northern sky to detect transient astronomical events. On this dataset, we benchmark 13 frontier closed-source and open-weight LLMs that support visual input. Our results reveal that high accuracy does not always align with model ``honesty,'' defined as the ability to self-evaluate its reasoning, which affects its reliability as a real-world assistant. We further initialize a human-in-the-loop evaluation protocol as a precursor to future community-scale participation. Together, AstroAlertBench provides a framework for developing calibrated and interpretable astronomical assistants.
astro-ph.IM 2026-05-07

Digital whitening distorts radio telescope spectra

Systematic Spectral Distortion from Digital Whitening in Radio Telescopes and Implications for 21 cm Cosmology

The effect, confirmed in OVRO-LWA data, reaches levels that bias 21 cm cosmology but can be reduced by dithering or adjusted gain settings.

We identify a systematic distortion of the gain-vs.-frequency function of radio telescopes caused by digital flattening ("whitening") of the signal's spectrum followed by re-quantization, a common pair of processes in the signal processing of modern telescopes. Wide-bandwidth telescopes often have a large variation of signal power over frequency. Flattening of the spectrum allows samples of the channelized signal to be represented in a small number of bits, allowing efficient downstream processing. However, we show that this produces subtle systematic error in the measured spectra. We explore this effect in data from the Owens Valley Radio Observatory's Long Wavelength Array (OVRO-LWA) and through detailed semi-analytic simulations. Although the effect can be small so that it has heretofore been unrecognized, we demonstrate that it produces distortion of the spectrum at a level that is problematic for some science, in particular 21 cm cosmology. Finally, we explore mitigation strategies, showing that the effect can be substantially reduced by careful choice of the gain distribution along the signal path or by incorporating dithering in the re-quantization step.
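The bias mechanism and the dithering fix can be seen in a two-line experiment (a minimal sketch, not the paper's semi-analytic simulation):

```python
import numpy as np

# A constant level of 0.3 LSB always rounds to 0: a systematic,
# level-dependent error of the kind that whitening-plus-re-quantization
# can imprint on a spectrum. Adding ~1 LSB of uniform dither before
# rounding makes the *mean* quantizer output equal to the input for any
# input level, removing the bias at the cost of a little extra noise.
rng = np.random.default_rng(4)
level = 0.3                                    # signal level in LSB units
n = 1_000_000

plain = np.round(np.full(n, level))            # no dither: all zeros
dith = np.round(level + rng.uniform(-0.5, 0.5, n))

bias_plain = plain.mean() - level              # -0.3 LSB, systematic
bias_dith = dith.mean() - level                # ~0 on average
```

The same logic motivates the paper's other mitigation, adjusting the gain distribution so signal levels never sit at a biased operating point.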
astro-ph.IM 2026-05-07

Oxygen-tuned Al resonators measure nonlinear kinetic inductance

On-Chip Resonator for Nonlinear Kinetic Inductance Characterisation and Future Spectrometry Applications

DC and microwave tests on evaporated films match a kinetic model and supply design constraints for future mm-wave on-chip spectrometers.

This work focuses on the development and demonstration of a tunable superconducting on-chip resonator, leveraging the intrinsic current-dependent non-linear kinetic inductance of superconducting aluminium and investigating the effect of oxygen content. Thin films are deposited using standard metal evaporation. We present results from a comprehensive study based on a series of evaporated Al thin films. This research aims to inform and constrain optimisation strategies for the design of mm-wave on-chip spectrometers, particularly regarding yield, resolution, and efficiency. By systematically varying film stoichiometry, we use a series of DC measurements to extract fundamental film properties such as resistivity, critical current, and critical temperature. Furthermore, we employ low-loss DC-coupled microwave resonators to characterise both their microwave properties and their non-linear kinetic inductance, comparing these findings to a non-linear kinetic model. Finally, we discuss possible use in a parametric amplifier.
astro-ph.IM 2026-05-07

No extraterrestrial signals found in decade of SETI observations

Results of ten years of UCLA SETI searches with the Green Bank Telescope

This sets an upper limit of less than 0.0063 percent of stars within 20,000 light years hosting detectable transmitters.

We have been conducting a search for narrowband radio signals with the L-band receiver (1.15-1.73 GHz) of the 100 m diameter Green Bank Telescope (Margot et al., 2023). So far, we have captured radio emissions from 70,000+ stars and planetary systems in the ~9 arcminute beam of the telescope. Our data-processing pipeline has a demonstrated 94%-99% efficiency for the detection of narrowband signals across the full range of frequency drift rates (+/-9 Hz/s). All 100 million candidate signals detected to date were either automatically (99.5%) or visually (0.5%) confirmed to be anthropogenic in nature. These results allow us to place stringent limits on transmitter prevalence: at the 95% confidence level, fewer than 6.3e-5 of stars within 20,000 ly host a transmitter that is detectable in our search (EIRP > 5e16 W). Our most interesting signals have been uploaded to a citizen science platform (http://arewealone.earth), where 40,000+ volunteers to date have contributed insights and classifications. We are using artificial intelligence (AI) to accelerate our search, automatically excise radio frequency interference, and improve signal detection. UCLA SETI research has involved ~200 undergraduate and ~20 graduate students so far.
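The shape of the quoted limit follows from the standard zero-detection argument (a textbook sketch; the paper's 6.3e-5 figure comes from its full analysis, which folds in detection efficiency and the subset of targets within 20,000 ly, so this toy formula does not reproduce it):

```python
# If none of N independently observed systems yields a detection, the
# 95% upper limit on the fraction f of systems hosting a detectable
# transmitter satisfies (1 - f)^N = 0.05.
def zero_detection_limit(n_systems, confidence=0.95):
    return 1.0 - (1.0 - confidence) ** (1.0 / n_systems)

f_max = zero_detection_limit(70_000)   # ~4.3e-5 for the raw star count
```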
astro-ph.IM 2026-05-07

STFT and image segmentation raise pulsar S/N after RFI cleaning

Computer Vision Methods for Frequency Analysis of RFI in Radio Astronomy Data

GBT data on PSR J1713+0747 shows higher recovered pulse signal-to-noise than the Spectral Kurtosis baseline when each channel is segmented in the STFT domain.

Radio Frequency Interference (RFI) increasingly contaminates the radio astronomy spectrum, often exceeding astronomical signal amplitudes by 50-70 dB. Reliable detection and mitigation are therefore essential for studies of faint transient phenomena such as pulsars and fast radio bursts (FRBs). Existing practical methods (including Spectral Kurtosis (SK), Median Absolute Deviation (MAD), and SumThreshold) perform well in many settings but depend on assumptions about the RFI environment and data statistics, limiting their effectiveness for weak, broadband, or non-stationary interference. We develop a transform-based RFI detection method that requires no prior knowledge of RFI origin or type. Using Green Bank Telescope (GBT) data containing PSR J1713+0747, with 4096 channels spanning 1.1-1.9 GHz and 5.12 microsecond sampling, we apply a Short-Time Fourier Transform (STFT) to each channel and use an image segmentation algorithm on the STFT magnitude to generate a binary RFI mask. The masked data are inverse transformed and reassembled into a cleaned time series. Performance is assessed using the Signal-to-Noise Ratio (S/N) of a single pulse of PSR J1713+0747, with SK serving as the baseline. The cleaned spectrogram is dedispersed, integrated across frequency, and evaluated through the resulting S/N. Experimental results show that refining each channel's frequency content via STFT, followed by segmentation in the STFT domain, yields measurable improvements in RFI suppression.
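The masking scheme can be sketched with non-overlapping FFT blocks standing in for the STFT (a toy, not the paper's pipeline, which uses an image-segmentation algorithm rather than the simple threshold below):

```python
import numpy as np

# Within one frequency channel's time series: FFT non-overlapping blocks,
# zero any spectral bin whose magnitude exceeds a robust threshold, and
# invert. A rectangular window with hop == block length keeps the
# inverse transform exact.
rng = np.random.default_rng(5)
n, block = 65536, 1024
t = np.arange(n)
noise = rng.normal(0.0, 1.0, n)              # stands in for the sky signal
rfi = 4.0 * np.sin(2 * np.pi * 0.123 * t)    # strong narrowband RFI
x = noise + rfi

frames = x.reshape(-1, block)
spec = np.fft.rfft(frames, axis=1)
mag = np.abs(spec)
thresh = 5.0 * np.median(mag)                # robust per-dataset threshold
spec[mag > thresh] = 0.0                     # binary RFI mask
cleaned = np.fft.irfft(spec, n=block, axis=1).ravel()

# The RFI tone dominates a few STFT bins and is excised there, while the
# broadband noise floor (and any pulse riding on it) survives.
```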
astro-ph.IM 2026-05-07

Self-organizing map sorts 1.5 million TESS light curves by variability

A useful representation of TESS light curves

Quantile graphs reduced by PCA and placed on the map group sources by amplitude, timescale, shape, and signal quality, with repeat observations staying in stable map regions.

We present a simple and interpretable representation of TESS light curves designed for large-scale exploratory analysis. Our goal is not to optimize classification performance, but to construct a computationally efficient mapping in which proximity reflects meaningful similarity, without using labels or explicit period information as inputs. We represent each light curve using either quantile graphs or scattering transforms, reduce dimensionality with principal component analysis, and project the resulting features onto a self-organizing map (SOM). We evaluate ~1500 model configurations using a combination of standard embedding diagnostics and a light-curve-shape-based cohesion metric, and select a compact quantile-graph-based model that balances interpretability, stability, and performance. Applying the model to ~1.5 million TESS 2-minute cadence light curves, we find that the map organizes sources primarily by variability amplitude, signal-to-noise ratio, characteristic timescale, and light-curve shape. Repeat observations of the same stars show that most sources occupy stable and contiguous regions of the map, indicating that the representation captures persistent properties rather than noise and systematics. We provide an interactive web interface at http://tess-l8.space that enables inspection of nodes, nearest neighbors, and individual sources across sectors. The resulting representation serves as a practical tool for exploration, anomaly detection, and dataset characterization, and illustrates how simple, deterministic encodings can yield useful structure in large astronomical time-series datasets.
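A quantile-graph-style encoding followed by PCA can be sketched as follows (an assumption-laden reading of the recipe, not the authors' exact implementation; the downstream SOM is omitted):

```python
import numpy as np

# Discretise a light curve into Q quantile bins, count transitions
# between consecutive samples to form a Q x Q matrix, flatten it as the
# feature vector, then reduce with PCA. A SOM would be trained on the
# reduced features downstream.
rng = np.random.default_rng(6)
Q = 8

def quantile_graph(flux):
    edges = np.quantile(flux, np.linspace(0.0, 1.0, Q + 1)[1:-1])
    states = np.digitize(flux, edges)            # quantile state 0..Q-1
    mat = np.zeros((Q, Q))
    np.add.at(mat, (states[:-1], states[1:]), 1.0)
    return (mat / mat.sum()).ravel()             # normalised transition counts

# Features for a toy mix of sinusoidal "light curves" with varying noise.
t = np.linspace(0.0, 10.0, 500)
curves = [np.sin(2 * np.pi * f * t) + rng.normal(0.0, s, t.size)
          for f in (0.3, 0.5, 1.0, 2.0) for s in (0.1, 1.0)]
X = np.array([quantile_graph(c) for c in curves])

# PCA via SVD on mean-centred features.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T                              # first two components
```

Because the encoding is deterministic and label-free, proximity in `pcs` reflects similarity of variability structure, which is the property the abstract emphasises.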
astro-ph.IM 2026-05-07

Lensed supernova detector reaches 60% true positives by seventh observation

HOLISMOKES XXI: Detecting strongly lensed type Ia supernovae from time series of multi-band LSST-like imaging data -- Part II

Realistic simulations with PSF variations and foreground contaminants show the time-series classifier identifies candidates for prompt LSST follow-up.

Strong gravitationally lensed supernovae (LSNe) are rare but extremely valuable probes of cosmology and astrophysics. Prompt identification within the alert streams of time-domain surveys such as the Rubin Legacy Survey of Space and Time (LSST) is essential for timely follow-up observations. In our previous study, Bag et al. (2026), we introduced a deep-learning framework for detecting LSNe Ia directly from multi-band, multi-epoch image cutouts. The model employs a convolutional LSTM architecture to capture spatiotemporal correlations in time-series imaging data, enabling classification updates as new observations arrive. In this work, we extend that framework by incorporating greater realism into the simulations. In particular, we present a method to construct realistic image time series from single-epoch observations by introducing epoch-to-epoch point spread function variations with corresponding variance-map corrections. The dataset is based on HSC PDR3 observations and includes simulated lensed host-galaxy arcs, SN light-curve variations, and Poisson noise. We also introduce an additional negative class consisting of SN Ia occurring in the foreground lens galaxy, representing a challenging source of false positives. Despite these additional complexities, the model retains strong performance. The receiver operating characteristic improves rapidly during the first few observations, reaching a true-positive rate of $\sim60\%$ at a false-positive rate of $\mathcal{O}(10^{-4})$ by the seventh observation and $\sim80\%$ by the tenth. We also investigate potential confusion with sibling SNe occurring in LRGs and identify the configurations that best mimic lensed systems. These results demonstrate that the image-time-series approach remains robust under more realistic observing conditions, and is well suited for real-time LSN searches in LSST and other time-domain surveys.
astro-ph.IM 2026-05-07

Compact telescope detects first cosmic-ray air showers

First Detection of Extensive Air Showers Using a Small-Aperture Fluorescence Telescope

A 25 cm fluorescence instrument records over 15 shower tracks at high altitude, showing small optics can observe ultra-high-energy cosmic rays.

We report on the successful detection of extensive air showers (EAS) generated by ultra-high-energy cosmic rays using a small-aperture fluorescence telescope (FT) deployed at the Mount Aragats high-altitude research station. The instrument is equipped with a 25 cm diameter Fresnel lens and operates with a 2.625 $\mu$s time resolution. To our knowledge, this represents the first-ever observation of EAS achieved with an FT of such a compact aperture. To isolate shower events from the observational data, we implemented two independent event selection pipelines: a conventional cut-based analysis and a deep learning approach utilizing neural networks. Both algorithms successfully identified over 15 high-confidence EAS tracks from data acquired during clear, moonless nights. We present selected event topologies and detail the background rejection methodology employed to discriminate true shower tracks from spurious focal-plane signals mimicking EAS signatures. These results provide an important proof-of-concept for the advancement of fluorescence detection techniques, demonstrating their viability for forthcoming ground-based and space-borne missions. Future efforts will focus on primary energy reconstruction utilizing a previously developed neural-network framework.
astro-ph.IM 2026-05-07

Ng21CMA reveals sawtooth spectral structure in SKA channelization

The Next-Generation 21CMA Telescope: Design, Commissioning, and Instrumental Effects in an SKA-LFAA-Like System

Upgraded telescope data shows two-stage channelization creates periodic ripples that affect spectral measurements.

As the Square Kilometre Array (SKA) approaches operational status, its complex digital architecture introduces new instrumental challenges. To explore relevant observational and data processing strategies, we have upgraded the 21CMA telescope to the Next-Generation 21CMA (Ng21CMA). This paper presents the design and commissioning of the Ng21CMA system, featuring a digital backend capable of real-time beamforming. We demonstrate its performance through interferometric observations and high-time-resolution pulsar measurements, validating the system's sensitivity and operational stability. As a representative example of instrumental effects accessible with this platform, we investigate the impact of the two-stage channelization strategy used in SKA-LFAA-like systems. We show that it introduces a sawtooth-like spectral structure (SLOSS), characterized using both simulations and observational data. These results provide useful references for understanding instrument-induced spectral features and for guiding system design and calibration in future large-scale aperture arrays.
astro-ph.IM 2026-05-06 3 theorems

Hubble archive delivers coadded spectra for every target

Overview of the New Hubble Spectroscopic Legacy Archive

Automatic combination of all COS and STIS observations over instrument lifetimes creates ready-to-use spectra with metadata.

The new Hubble Spectroscopic Legacy Archive (HSLA) provides coadded spectra of individual targets that have been observed with the Cosmic Origins Spectrograph (COS) and the Space Telescope Imaging Spectrograph (STIS) over their operating lifetime. HSLA uses data available in the Mikulski Archive for Space Telescopes (MAST). It automatically produces coadds whenever new data become publicly available or when there is newly recalibrated data. HSLA defines individual targets by their associated coordinates, accounting for proper motions, and uses SIMBAD, NED and the Phase II observing proposals to obtain astronomical classifications for each object. Coadded spectra are produced for each observing mode. In the case of COS far-ultraviolet observations there is one coadded spectrum for each lifetime position (LP). Additionally, a spectrum spanning the entire wavelength range covered by the observations is produced by abutting the spectra from a selection of individual modes. For each individual target, HSLA also provides a human-readable metadata file with key information that can be used in searches or for further exploration of the data. The HSLA project also makes the code used for coadding spectra publicly available along with several other tools (using Jupyter notebooks) for custom coaddition required in special cases. In this report we will describe the main components of HSLA and provide a brief description of how the data and metadata can be accessed.
astro-ph.IM 2026-05-06 2 theorems

ALMA upgrades to test gravity near black holes

Shaping the future of Global Interferometric Arrays: Imaging Strong Gravity and Magnetic Fields

Higher sensitivity and multi-frequency VLBI could tighten constraints on general relativity and clarify jet launching mechanisms.

The observational validation of General Relativity (GR) has been propelled in recent years by breakthroughs in Very Long Baseline Interferometry (VLBI) augmented by ALMA. We explore ALMA2040 opportunities to transform these studies through greatly improved sensitivity and a multi-frequency approach. The focus will be on placing the most stringent constraints on GR and alternative theories in the strong-gravity regime, and on understanding the formation and launching of relativistic jets.
astro-ph.IM 2026-05-06 Recognition

PySME v1.0 prunes weak lines to scale stellar spectrum modeling to surveys

PySME v1.0: improved modelling of stellar spectra for survey-scale applications

Opacity-based selection and refined hydrogen treatment let precise abundance work handle large survey datasets efficiently.

Stellar abundance analysis relies on flexible, high-performance spectral synthesis. To meet these needs, we present PySME v1.0, an updated Python implementation of Spectroscopy Made Easy (SME) designed for precise and survey-scale modelling of stellar spectra. A central challenge in SME-based synthesis is the efficient treatment of very large line lists, including both the preselection of negligible lines and the subsequent formal synthesis. PySME v1.0 introduces a revised line-selection framework based on opacity ratio and line depth, together with dynamic line list construction and control of the effective wavelength span over which each line contributes to the synthetic spectrum. These workflows support parallel preprocessing of weak-line selection and reduce the line list passed to the synthesis core, thereby improving scalability while preserving synthetic accuracy. PySME v1.0 also incorporates an updated equation-of-state treatment that improves the modelling of hydrogen lines, particularly Balmer features, while maintaining close agreement with previous SME results for metal lines. The Python interface has further been extended to support parameter-dependent derived quantities updated during optimisation, and PySME provides non-local thermodynamic equilibrium (NLTE) departure-coefficient grids for 17 elements. Together, these developments establish PySME v1.0 as a robust and efficient framework for high-precision stellar abundance analyses in large spectroscopic surveys.
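The opacity-ratio preselection idea reads roughly as follows (a hypothetical sketch; the threshold, values, and function names are invented and are not PySME's API):

```python
import numpy as np

# A line is kept if its line-centre opacity is a non-negligible fraction
# of the local continuum opacity: for weak lines the depth is ~ this
# ratio to first order, so lines below the survey's precision floor can
# be dropped before formal synthesis.
def preselect(line_opacity, continuum_opacity, depth_threshold=1e-4):
    ratio = line_opacity / continuum_opacity
    return ratio > depth_threshold

# Illustrative line list: two negligible lines, two that matter.
line_op = np.array([1e-9, 5e-6, 2e-3, 0.4])
cont_op = np.full(4, 1.0)
mask = preselect(line_op, cont_op)   # keeps only the two strongest lines
```

The practical payoff described in the abstract is that this filter is cheap and parallelisable, so the expensive synthesis core sees a much shorter list.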
astro-ph.IM 2026-05-06

The paper derives new equal-area weighted multipole basis functions for 2D image analysis…

Multipole Functions for Image Analysis II: Equal Area Weighting and Application to Supernova Remnant Images

Equal-area multipole analysis of supernova remnant images finds more radial structure in X-rays than radio, with core-collapse remnants…

abstract click to expand
New basis functions for two-dimensional (2D) image analysis with a circular boundary (referred to as multipole analysis) are derived which are equal-area weighted. We present open-access Python code hosted on GitHub, with which users can apply the multipole analysis to images. The new multipole analysis is applied to a set of 28 supernova remnants (SNRs) which are selected to have both radio and X-ray images, and have been identified as Type Ia or Type CC. Each pair of SNR images (radio and X-ray) was convolved to the same spatial resolution prior to analysis. The resulting multipole radial powers and angular powers, from order 0 to 5, for a given SNR are different for different multipoles and for a given multipole are different between X-ray and radio images. The X-ray radial powers (for orders >0) are larger on average than the radio radial powers (more radial structure in X-rays than radio). The angular powers are smaller than the radial powers on average (more radial structure than angular structure). Comparing Type Ia and Type CC populations, the radial powers (for orders >0) are on average larger for Type CC than Type Ia for X-ray and radio images, with larger difference for X-ray images. The angular powers (for orders >0) are similar between Type Ia and Type CC for both radio and X-ray images.
0
0
astro-ph.IM 2026-05-06

Daily monitoring tracks Gaia data quality over mission life

The First Look of Gaia: Daily data quality and instrument health assessment with automated early warnings

Limited astrometry combined with calibrations and diagnostics enables early warnings and long-term trend detection.

Figure from the paper full image
abstract click to expand
The ESA Gaia mission is a 10+ year astrometric whole-sky scan, demanding consistent data quality over the whole timespan of operations. Aims. The Gaia First Look (FL) is a system whose aim is monitoring the data quality to identify problems, which includes early warning capabilities for potential upcoming issues. Methods. In order to achieve its goals, the Gaia FL implemented its own limited astrometric solution, and used the daily calibrations from other segments of the Data Processing and Analysis Consortium (DPAC), as well as the diagnostic data from the satellite itself, in order to obtain a complete picture of the situation of the Gaia satellite on a daily basis. This led to a short-term health and data quality check, but also to a broader overview of the longer-term trends and evolutions within the payload. Potential issues that were encountered were reported to other groups within DPAC for further analysis purposes. When required, ways to mitigate the problems were discussed, and implemented. Results. We show a number of findings by the Gaia FL concerning longer-term evolution, individual but common effects, as well as detrimental impacts, all of which occurred over the operational phase of the Gaia mission.
0
0
astro-ph.IM 2026-05-06

GPU solvers compute self-gravity for star formation at competitive speeds

Iterative Poisson Solvers for Self-gravity with the GPU Code Astaroth

Iterative Poisson methods in Cartesian and spherical coordinates match existing algorithm performance on structured grids.

Figure from the paper full image
abstract click to expand
We present the development and benchmarking of Poisson solvers for graphics processing units (GPUs). Implemented in the Astaroth platform, the solvers feature high computational efficiency. We present novel combinations of discretizations and smoothers and document practical and performance-focused implementations aimed at reducing time-to-solution for self-gravitating systems. We describe the solver architectures and validate their accuracy against known analytic solutions. We measure convergence and timing per iteration for various solver algorithms, including conjugate gradient, successive overrelaxation, and multigrid in Cartesian coordinates, along with biconjugate gradient stabilized in spherical coordinates. We also couple the solvers to the Astaroth hydrodynamics to simulate a classic time-dependent problem in star formation, measuring accuracy and time-to-solution, for self-gravity on three-dimensional structured grids. Our results demonstrate that the solvers achieve performance similar to other algorithms implemented in Astaroth, and provide a solid foundation for integration into production-scale astrophysical simulations.
0
0
astro-ph.IM 2026-05-06
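The conjugate-gradient approach benchmarked in the abstract above can be illustrated with a matrix-free CPU sketch. This is a generic 2D toy with assumed zero-Dirichlet boundaries and unit grid spacing, not the Astaroth GPU implementation:

```python
import numpy as np

def laplacian(u, h=1.0):
    """5-point Laplacian, with zero-Dirichlet values assumed outside the grid."""
    L = -4.0 * u
    L[1:, :] += u[:-1, :]
    L[:-1, :] += u[1:, :]
    L[:, 1:] += u[:, :-1]
    L[:, :-1] += u[:, 1:]
    return L / h**2

def poisson_cg(rho, h=1.0, tol=1e-8, max_iter=500):
    """Matrix-free conjugate gradient for laplacian(phi) = rho.
    The discrete operator is negative definite; CG applied verbatim is
    equivalent to CG on the negated (SPD) system."""
    phi = np.zeros_like(rho)
    r = rho - laplacian(phi, h)   # initial residual
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(max_iter):
        Ap = laplacian(p, h)
        alpha = rs / np.vdot(p, Ap)
        phi += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new search direction
        rs = rs_new
    return phi
```

The paper's solvers additionally cover successive overrelaxation, multigrid, and BiCGSTAB in spherical coordinates; the CG skeleton above is the common starting point.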

Forward models fix jitter bias in binary star measurements

Mitigating effects of telescope jitter through differentiable forward-modeling

Differentiable simulations recover accurate separations unless a one-dimensional jitter model is wrongly applied to two-dimensional motion.

Figure from the paper full image
abstract click to expand
Instabilities in telescope pointing, commonly referred to as jitter, introduce image degradation that can compromise the accuracy of critical scientific observables. This work presents a differentiable forward-modeling approach to both understand and mitigate the impact of jitter. We apply dLux -- a differentiable optical simulation framework built in the JAX numerical simulation framework -- to model the blurring effects of jitter on the final image. We categorize jitter into low-, medium-, and high-frequency regimes with respect to the camera frame rate and build simple jitter models based on its manifestation on the detector. The forward-model approach proves effective for low- and high-frequency regimes, but the inherent unpredictability of medium-frequency jitter may lead to model misspecification. As a test case we apply these models to the TOLIMAN mission, a forthcoming CubeSat telescope dedicated to detecting nearby Earth-analogue exoplanets through high-precision astrometry. Using Fisher information analysis, we quantify the effect of jitter on TOLIMAN's primary science observable -- the angular binary separation of the Alpha Centauri AB binary components. We find model misspecification does not introduce a systematic bias on the recovered binary separation except when fitting a one-dimensional jitter model to a two-dimensional motion, hence we recommend the use of a two-dimensional model. The forward-model approach offers a generalized method applicable to other telescope systems, including ongoing work with JWST's NIRISS instrument. This approach represents a significant step toward delivering higher accuracy measurements at modern observatories as demands on precision continue to rise.
0
0
astro-ph.IM 2026-05-06
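The Fisher-information step described above can be sketched with a 1-D toy: two Gaussian PSFs standing in for the dLux optical model, and the Cramér-Rao bound on the binary separation computed from a numerical derivative. The grid, PSF widths, and noise level here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def binary_model(x, sep, sigma_psf):
    """Two unit-amplitude Gaussian PSFs separated by `sep` -- a 1-D toy
    stand-in for the full differentiable optical model."""
    g = lambda mu: np.exp(-0.5 * ((x - mu) / sigma_psf) ** 2)
    return g(-sep / 2.0) + g(sep / 2.0)

def separation_crlb(sep, sigma_psf, noise, eps=1e-5):
    """Cramer-Rao lower bound on the separation for per-pixel Gaussian
    noise, using a central-difference derivative of the model."""
    x = np.linspace(-10.0, 10.0, 2001)
    dm = (binary_model(x, sep + eps, sigma_psf)
          - binary_model(x, sep - eps, sigma_psf)) / (2.0 * eps)
    fisher = np.sum(dm ** 2) / noise ** 2
    return 1.0 / np.sqrt(fisher)
```

Broadening the PSF (the net effect of unmodelled jitter blur) lowers the Fisher information and inflates the bound, which is the kind of sensitivity loss the paper quantifies for TOLIMAN's binary-separation observable.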

AI model detects satellite streaks in telescope images at 94% precision

StreakMind: AI detection and analysis of satellite streaks in astronomical images with automated database integration

StreakMind reconstructs streak geometry, matches objects to orbits, and logs results for surveys and space monitoring.

Figure from the paper full image
abstract click to expand
Artificial satellites and space debris increasingly contaminate astronomical images, affecting scientific surveys and producing large volumes of streaked exposures. Manual inspection is no longer feasible at scale, and reliable detection and characterisation of streaks has become essential for both data-quality control and the monitoring of objects in Earth orbit. We present StreakMind, an automated pipeline designed to detect Near-Earth Objects and satellite streaks in astronomical images, characterise their geometry, and cross-identify them with known orbital objects. The system integrates all inference results into a structured database suitable for large surveys. A YOLO OBB model was trained on a hybrid dataset of 2335 images and applied to processed FITS frames. Geometric refinement, inter-frame association, satellite cross-identification, and Gaussian-based confidence scoring were then used to produce final identifications stored in a relational database. Observations from La Sagra Observatory were used to develop and test the method. On the test set, the model achieved a precision of 94 percent and a recall of 97 percent. It reliably detected faint streaks, delivered consistent geometric reconstructions, and performed robust satellite cross-identification. StreakMind demonstrates strong potential for large-scale automated analysis of linear streaks produced by both Near-Earth Objects and artificial satellites, contributing to space situational awareness.
0
0
astro-ph.IM 2026-05-05

NASA astrophysics fleet of $1-2B missions can complement flagships

Foundations for Discovery: A Coordinated Fleet Approach to NASA Astrophysics

The coordinated approach targets Astro2020 gaps through smaller missions that work together and leverage external partnerships.

Figure from the paper full image
abstract click to expand
This white paper presents an analysis of Astro2020 science priorities and NASA's future astrophysics mission architecture, advocating for a coordinated fleet of $1-2B missions, smaller than typical Flagship observatories, but strategically designed to complement them, i.e. a "Next Generation Great Observatories" program. The study addresses opportunities in current mission planning, design, and implementation and proposes a strategic approach to maximize scientific return on investment while strengthening partnerships across NASA divisions, other government organizations, universities, and industry.
0
0
astro-ph.IM 2026-05-05

CASA shows users can guide software priorities without extra costs

Role of General Users in the Lifecycle of Scientific Software

Radio astronomy tool's experience reveals ways to balance internal deadlines with general user needs using limited support resources.

Figure from the paper full image
abstract click to expand
In science, the lifecycle of software products is typically managed with limited resources while facing unlimited demand. Scientific software requirements are necessarily often dominated by internal project specifications and deadlines, but these internal priorities, while beneficial for the community as a whole, do not always align with the individual needs of our ultimate customers: general users. For software products to have the broadest reach, ideally the general user community should be involved in all aspects of the data lifecycle, but reality is that user expectations need to be managed. Based on the lifecycle of the Common Astronomy Software Applications for radio astronomy (CASA), we will show avenues for software teams to interact with general users, even when facing limited resources for user support. We will discuss how involvement of users and user groups in prioritizing software development can benefit both the user community and the software teams. The contents of these proceedings were presented at the 35th conference on Astronomical Data Analysis Software & Systems (ADASS XXXV).
0
0
astro-ph.IM 2026-05-05

Linear algorithm segments high-activity regions in irregular time series

PDRS: A Linear $\mathcal{O}(N)$ Algorithm for Segmentation of High-Activity Regions in Irregularly Sampled Time Series

PDRS identifies candidate transients from large datasets like ZTF in O(N) time, matching Bayesian Blocks quality at far lower cost.

Figure from the paper full image
abstract click to expand
Identifying transient high-activity episodes in astronomical time series requires partitioning data into regions of distinct statistical behavior. A widely adopted approach combines Bayesian Blocks with a hill-climbing procedure to isolate high-activity regions, but carries $\mathcal{O}(N^2)$ complexity -- a scalability challenge for wide-field surveys like ZTF and the upcoming Rubin Observatory (LSST), where light curves routinely contain thousands of irregularly sampled observations. We present Peak-Driven Region Segmentation (PDRS), a linear-time $\mathcal{O}(N)$ algorithm for rapid extraction of high-activity regions in irregularly sampled data. PDRS seeds candidate regions at statistically significant local maxima and expands them via a gradient-aware multi-source breadth-first search. Saddle-point merging and a median-based filter suppress spurious detections. Functioning as a computationally efficient pre-processing stage, PDRS isolates candidate transient events for downstream analysis. We demonstrate its efficacy on quasar light curves from SDSS Stripe~82 and AGN light curves from ZTF DR23, showing that PDRS identifies candidate high-activity regions comparable to those from Bayesian Blocks at substantially reduced cost. Its domain-agnostic formulation and physically interpretable parameters make PDRS broadly applicable beyond astronomy, including biomedical signals, seismic recordings, and industrial sensor monitoring.
0
0
astro-ph.IM 2026-05-05 2 theorems
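The seeding-and-expansion idea behind PDRS is simple enough to sketch in a few lines. The toy below is illustrative only: it uses a MAD threshold, strict-descent expansion, and interval merging in place of the paper's gradient-aware multi-source breadth-first search and median-based filtering, but it keeps the linear-time flavour:

```python
import numpy as np

def pdrs_segment(y, k_sigma=3.0):
    """Toy peak-driven segmentation (not the authors' PDRS code):
    seed regions at significant local maxima, expand each seed outward
    while the series keeps descending, then merge touching regions."""
    y = np.asarray(y, dtype=float)
    med = np.median(y)
    mad = 1.4826 * np.median(np.abs(y - med))  # robust noise scale
    thresh = med + k_sigma * mad
    # seeds: local maxima above the significance threshold
    seeds = [i for i in range(1, len(y) - 1)
             if y[i] >= y[i - 1] and y[i] >= y[i + 1] and y[i] > thresh]
    regions = []
    for s in seeds:
        lo = s
        while lo > 0 and y[lo - 1] < y[lo]:   # expand left while descending
            lo -= 1
        hi = s
        while hi < len(y) - 1 and y[hi + 1] < y[hi]:  # expand right
            hi += 1
        regions.append((lo, hi))
    # merge regions that touch or overlap (simplified saddle merging)
    merged = []
    for lo, hi in sorted(regions):
        if merged and lo <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

Each sample is visited a bounded number of times, which is the property that lets the real algorithm replace the $\mathcal{O}(N^2)$ Bayesian Blocks pre-processing stage.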

EST multi-aperture mode cuts seeing errors for sharper solar images

Operating the Fabry-Pérot systems of the European Solar Telescope in multi-aperture mode

Segmenting the 4.2 m aperture into six 1.4 m subapertures enables reliable post-processing restoration before MCAO is ready.

Figure from the paper full image
abstract click to expand
We discuss how to optimise the science output of the European Solar Telescope (EST), when used without the wide-field compensation for high-altitude seeing that the EST multi-conjugate adaptive optics (MCAO) will offer. This will be the mode of operating EST during its first year(s). Without MCAO, the spatial resolution of a much smaller telescope could surpass that of EST. We therefore propose to operate EST in multi-aperture mode, by optically segmenting the 4.2 m aperture into six 1.4 m subapertures, until MCAO is operational. Operating at smaller aperture diameter pushes down the root mean square wavefront errors from the high altitude seeing to levels that can more reliably be compensated for in restored images using post-processing methods. This will significantly improve image quality. In particular, the multi-aperture mode will provide the sustained stable high image quality needed for obtaining time sequences of spectropolarimetric data. The multi-aperture mode is implemented with low-cost modifications of the camera lenses of the three Fabry-Pérot systems that will be used to cover the wavelength range 380-860 nm. Switching between the full-aperture and multi-aperture modes can be done quickly and independently for the three FPI systems. This allows flexible optimisation of EST, taking into account that the seeing is much better at long wavelengths than at short wavelengths, without any impact on the EST primary or secondary optical systems or on the actual FPI systems. The multi-aperture addition to EST provides a powerful and flexible option that has the potential of significantly improving the quality and amount of its science data before MCAO is operational. In this publication, we perform simulations and image reconstructions of simulated data to demonstrate the benefits of the multi-aperture option, and provide a simple optical design to demonstrate its feasibility.
0
0
astro-ph.IM 2026-05-05 2 theorems

80 cm telescope reaches J=19.4 depth in 30-minute stacks

Design, Testing, and Commissioning of the Sun Yat-sen University (SYSU) 80 cm Infrared Telescope

Background-limited J-band performance with millimagnitude precision supports observations of transients and high-redshift objects from 4100 m.

Figure from the paper full image
abstract click to expand
The Sun Yat-sen University (SYSU) 80 cm telescope is a new generation near-infrared (NIR) facility in China dedicated to time-domain astronomy, while also serving as a testbed for emerging NIR cameras. Commissioned in October 2024 at the 4100 m Lenghu site on the Tibetan Plateau in China, the telescope adopts a reflective Cassegrain design with two Nasmyth foci for J and K bands. The J band imaging system, initially equipped with a 640 x 512 off-the-shelf InGaAs camera (INS Mars640) and upgraded in June 2025 to a 1280 x 1024 science-grade, deeply cooled camera (YNAOIR), achieves background-limited performance with a dark current of ~ 14 e-/s/pix and a readout noise of ~ 11 e-. The system reaches a limiting magnitude of J ~ 17 mag (Vega system) in single 20 s exposures and depths of J ~ 19.4 mag with stacked 30 minute exposures. For a variable with J ~ 14 mag during on-sky tests, the system delivers millimagnitude-level photometric precision. Since commissioning, the telescope observed transients such as gamma-ray bursts (GRBs), supernovae and comets, variables including active galactic nuclei (AGNs), high-redshift quasars (z > 6), and brown dwarfs, as well as deep-field imaging reaching J ~ 20.5 mag. This validates the feasibility of using InGaAs cameras for astronomical observations, encouraging other institutions to develop dedicated infrared telescopes or integrate infrared cameras into existing optical telescopes.
0
0
astro-ph.IM 2026-05-04

Two-group split makes life detection feasible with small surveys

The Catastrophic Consequences of Agnosticism for Life Searches and a Possible Workaround

By forcing confounder rates to be identical across groups that differ in life probability, the method yields strong evidence in 24 percent of possible outcomes.

Figure from the paper full image
abstract click to expand
Planned and ongoing searches for life, both biological and technological, confront an epistemic barrier concerning false positives - namely, that we don't know what we don't know. The most defensible and agnostic approach is to adopt diffuse (uninformative) priors, not only for the prevalence of life, but also for the prevalence of confounders. We evaluate the resulting Bayes factors between the null and life hypotheses for an idealized experiment with $N_{pos}$ positive labels (biosignature detections) among $N_{tot}$ targets with various priors. Using diffuse priors, the consequences are catastrophic for life detection, requiring at least ${\sim}10^4$ (for some priors ${\sim}10^{13}$) surveyed targets to ever obtain "strong evidence" for life. Accordingly, an HWO-scale survey with $N_{tot}{\sim}25$ would have no prospect of achieving this goal. A previously suggested workaround is to forgo the agnostic confounder prior, by asserting some upper limit on it for example, but we find that the results can be highly sensitive to this choice - as well as difficult to justify. Instead, we suggest a novel solution that retains agnosticism: by dividing the sample into two groups for which the prevalence of life differs, but the confounder rate is global. We show that a $N_{tot}=24$ survey could expect 24% of possible outcomes to produce strong life detections with this strategy, rising to $\geq50$% for $N_{tot}\geq76$. However, AB-testing introduces its own unique challenges to survey design, requiring two groups with differing life prevalence rates (ideally greatly so) but a global confounder rate.
0
0
astro-ph.IM 2026-05-04
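One way to see the scaling claimed in the abstract: with fully agnostic (uniform) priors on both the confounder rate and the life-signal rate, the Bayes factor for an all-positive survey grows only logarithmically with the sample size, so reaching "strong evidence" (K of order 10) needs on the order of $10^4$ targets. The sketch below is a toy reading of that setup, with an assumed simplified occurrence model, not the authors' code:

```python
import numpy as np
from math import comb

def bayes_factor(n_pos, n_tot, grid=1500):
    """Toy Bayes factor K = P(data | life + confounders) / P(data | confounders only)
    with uniform priors on the confounder rate f_c and the life rate f_l.
    Assumed occurrence model: positives arise at rate f_c under the null,
    and at f_c + f_l - f_c*f_l under the life hypothesis."""
    f = (np.arange(grid) + 0.5) / grid       # midpoint rule on [0, 1]
    b = comb(n_tot, n_pos)
    # null: marginalize the binomial likelihood over f_c alone
    p_null = np.mean(b * f**n_pos * (1 - f)**(n_tot - n_pos))
    # life: marginalize over both f_c and f_l
    fc, fl = np.meshgrid(f, f, indexing="ij")
    g = fc + fl - fc * fl                    # combined positive rate
    p_life = np.mean(b * g**n_pos * (1 - g)**(n_tot - n_pos))
    return p_life / p_null
```

Even when all 25 targets of an HWO-scale survey show a biosignature, this toy model gives K of only about 4 -- nowhere near strong evidence, echoing the abstract's point about diffuse confounder priors.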

Alignments on 1950s plates point to constant Earth longitudes

Statistically Significant Linear Alignments Among High-Confidence Transient Candidates on POSS-I Photographic Plates

Seven groups of point sources project to fixed longitudes with high significance, before any satellites existed.

Figure from the paper full image
abstract click to expand
I report the detection of statistically significant linear alignments and anomalous spatial clustering among high-confidence transient candidates in the VASCO catalog of vanishing sources on Palomar Observatory Sky Survey (POSS-I) photographic plates (1949-1957). A machine learning classifier scores 107,875 candidates by their likelihood of being genuine transients. Searching the 36,215 candidates with probability >= 0.50 for collinear groupings narrower than 3 arcsec, I find 7 plates with alignments of 5-8 sources that exceed Monte Carlo expectations (p < 0.03, 10,000 iterations). The aligned sources are point-like, not streaks, which rules out any continuously luminous object crossing the field during the 45-minute exposures. The implied angular rates (1-15 arcsec/s) overlap with the geosynchronous regime but are inconsistent with low or medium Earth orbits, and no artificial satellites existed during the POSS-I era. When I project each alignment onto Earth's surface assuming a high-altitude object, 6 of 7 maintain constant geographic longitude with sub-degree spread (combined p ~ 3e-10). Four of these cluster near -96 deg longitude (central United States); one falls within 0.3 deg of the longitude of the Hanford nuclear production site on a nuclear test window date. Close pairs (< 30 arcsec) occur at 16.2x the random rate, and the nights with alignments are the same nights with excess close pairs (Fisher exact p < 0.0001). Plate artifacts cluster near the ecliptic plane (26%), but high-confidence transients are depleted there (16%; chi-square test p = 3.3e-82), which rules out asteroids, comets, and zodiacal debris as the dominant source. No transient reappears at the same sky position on a different night. All of these transients predate Sputnik 1.
0
0
astro-ph.IM 2026-05-04

Geostationary source calibrates CMB polarization to 0.1 degrees

Status of the COSmological Microwave Observations CALibrator

The COSMOCal project builds an orbiting emitter at 90, 150 and 270 GHz to give telescopes a precise reference for polarization measurements.

abstract click to expand
As the sensitivity of CMB telescopes increases, the need for precise calibration becomes critical. Started in 2022, the COSMOCal project aims to place an artificial polarized source in geostationary orbit, which will serve as a reference for CMB telescopes. This source will emit at 90, 150 and 270 GHz and will be linearly polarized, with its orientation controlled to better than 0.1 deg. This proceeding presents the scientific motivations for the project, the current status of the development of the instrument and the results of a calibration campaign performed in March 2026 at the Institut d'Astrophysique Spatiale.
0
0
astro-ph.IM 2026-05-04

Outer Space Treaty lacks enforcement against rising space pressures

The Outer Space Treaty Won't Save Us From Ourselves

Goodwill-based rules face unsustainable public and private demands, with environmental recognition proposed as the basis for new stewardship

abstract click to expand
The rapid growth of human activities in outer space sounds urgent alarms around ethical and philosophical issues, particularly concerning space militarization. The present international legal framework governing activities in space, the Outer Space Treaty (OST), views the peaceful exploration of space for scientific research as co-equal to other 'uses' entitled to "due regard" with respect to "potentially harmful interference" on the part of other space actors. The OST is deeply aspirational but has weak enforcement mechanisms, relying at its core on the goodwill of all involved parties as the fundamental basis for accountability. But that framework now faces unsustainable pressures from both public and private interests, and current agreements like the OST may be unable to exert timely, material protections. Terrestrial frameworks of "ethics of deterrence" versus the "ethics of agreements" are quickly expanding into cosmic environments. We argue for the legal recognition of space as an environment as the basis of any future approach to securing its integrity, and share examples of agreements grounded in peaceful cooperative stewardship of shared environments. These represent potential pathways forward that are ethical and also serve rational self-interest and self-preservation at this crucial juncture for humanity.
0
0
astro-ph.IM 2026-05-04

Cut astronomy grad admissions to two letters and April 1 deadline

Recommendations for the Astronomy Graduate Admissions Process

Uniform short materials and synchronized timelines ease the load on applicants and programs.

abstract click to expand
As the AAS Working Group on Graduate Admissions (WGGA) we are sharing brief recommendations for improving and standardizing key elements of the graduate admissions process in astronomy. Most astronomy graduate programs have large areas of overlap in their admissions processes; however, the existing small variations in requirements and mismatches in communication and transparency make admissions more challenging for students and programs alike. To improve this situation, and building on the work presented in the AAS Graduate Admissions Task Force (GATF) report we recommend a few simple and straightforward changes for application content, communication, and timelines. These include an application format that consists of 1) two 500-word recommendation letters, 2) one 1500-word application essay, 3) an applicant CV, and 4) unofficial transcripts; and an admissions timeline that includes effective and transparent communication from programs and encouraging an April 1st "down-select date" for applicants.
0
0
astro-ph.IM 2026-05-04

Hybrid mirror cuts gravitational-wave coating noise tenfold

Beyond Bragg-Mirrors for Gravitational Wave Telescopes: A Fabrication Tolerant Hybrid Metasurface-Bragg Mirror Design

A tolerant one-layer metasurface plus seven Bragg pairs meets ET-Pathfinder reflectance specs with far lower thermal displacement noise.

Figure from the paper full image
abstract click to expand
Coating thermal noise in high-reflectivity test-mass mirrors is a major limitation for future gravitational-wave detectors, especially in the 10-300 Hz band. ET-Pathfinder therefore requires mirror coatings that combine very high reflectance at 1.55 micrometer with low thermal noise under cryogenic conditions. Conventional dielectric Bragg mirrors provide high reflectance but require thick coatings, whereas metasurface mirrors can reduce coating-related noise but are limited by fabrication tolerances and line-edge roughness. We present a hybrid metasurface-Bragg mirror concept tailored to ET-Pathfinder. The design combines a fabrication-tolerant one-layer metasurface, an anti-resonant Fabry-Pérot spacer, and a reduced dielectric Bragg stack. Optical performance is evaluated using full-wave electromagnetic simulations, while fabrication robustness is assessed with a truncated-Gaussian Monte Carlo analysis. Line-edge roughness is included as a systematic edge-smoothing effect. The resulting reflectance distributions are used to determine the minimum Bragg-stack support required to meet system-level specifications. The ideal metasurface exceeds 99.999% reflectance. When fabrication uncertainties and line-edge roughness are included, the metasurface reflectance is limited to about 99.9% at the 95% yield level. The remaining transmission can be compensated by a supporting Bragg stack with as few as seven layer pairs. For this configuration, the hybrid mirror achieves a total thermal displacement noise about one order of magnitude below the projected ET-Pathfinder coating-noise budget. These results show that fabrication-limited metasurface reflectance can be compensated within a hybrid architecture, enabling reduced coating thickness and thermal noise for next-generation gravitational-wave detectors.
0
0
astro-ph.IM 2026-05-04

Neural net flags gravitational-wave candidates in the mass gap

Training a neural network to rapidly identify candidate gravitational-wave events in the lower mass gap

The model achieves a 9 percent average error on the mass-gap probability for O4a events by reading the chirp mass alone.

Figure from the paper full image
abstract click to expand
The physics governing the boundary between the most massive neutron stars (NSs) and the least massive black holes (BHs) is currently uncertain, but could potentially be constrained with new observations. While NSs have been observed with masses up to $\sim2~M_{\odot}$, there is a dearth of electromagnetic observations of compact objects in the $\sim2-5~M_{\odot}$ range, known as the lower mass gap. Recent observations of gravitational-wave (GW) signals from binary mergers detected by the LIGO-Virgo-KAGRA (LVK) collaboration indicate that this gap is likely not empty. Rapidly distinguishing whether a candidate GW event has components in this purported mass gap can indicate the likelihood of a detectable electromagnetic counterpart, and thus inform decisions for follow-up observations. In this work we train a neural network model, GWSkyNet-MassGap, that simultaneously predicts the probability that a candidate merger has a component in the lower mass gap ($P_{\mathrm{MassGap}}$) and the probability that it involves a NS ($P_{\mathrm{NS}}$). We find that the model is able to infer information about the source chirp mass to predict $P_{\mathrm{MassGap}}$ and $P_{\mathrm{NS}}$, leading to correct predictions for high-mass mergers with $\mathcal{M}_c\gtrsim15~M_{\odot}$, but less accurate predictions for lower-mass systems which require knowledge of the binary mass ratio to break the mass degeneracy. For candidate events in the first part of LVK's fourth observing run (O4a), the model has a mean prediction error of 9% for $P_{\mathrm{MassGap}}$ and 6% for $P_{\mathrm{NS}}$. The model could be further developed to rapidly predict the source chirp mass for candidate events in future observing runs.
0
0
astro-ph.IM 2026-05-01
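The chirp-mass degeneracy the abstract points to follows directly from the standard formula $\mathcal{M}_c = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}$: very different component pairs can share nearly the same chirp mass, so $\mathcal{M}_c$ alone cannot say whether a component sits in the $2-5~M_{\odot}$ gap. A quick worked example (the 2.6 + 0.8 $M_{\odot}$ pairing is purely illustrative):

```python
def chirp_mass(m1, m2):
    """Standard chirp mass, in the same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# equal-mass double neutron star: no component in the 2-5 Msun gap
mc_bns = chirp_mass(1.4, 1.4)
# asymmetric pairing with a 2.6 Msun mass-gap component (illustrative)
mc_gap = chirp_mass(2.6, 0.8)
```

Both come out near 1.22 solar masses; breaking the tie requires the binary mass ratio, which is exactly the limitation the paper reports for lower-mass systems.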

Cryogenic alpha exposure tests KIDs at 62% of expected L2 dose

Radiation Total Dose for PRIMA: Cold Exposure with Alpha Particles

The method keeps detectors cold to replicate space conditions and measures performance changes after controlled irradiation.

Figure from the paper full image
abstract click to expand
The Probe far-Infrared Mission for Astrophysics (PRIMA) is a far-infrared (24-261 micron wavelengths) probe-class space observatory currently under Phase A study, which promises orders-of-magnitude improvement in mapping speed over its predecessors. PRIMA will field exquisitely sensitive kilopixel arrays of kinetic inductance detectors (KIDs) for the Far-Infrared Enhanced Survey Spectrometer (FIRESS) instrument. PRIMA will orbit in space at the Sun-Earth L2 point, where Planck found the energetic particle flux to be about 300/min/cm2. Thus, the possible effect of a high fluence of energetic particles on the detector sensitivity must be characterized. Previous work has suggested that bombardment of KIDs by ions can reduce the quasiparticle lifetime (Barends et al. 2009), but the conditions of the experiment were not representative of a detector which is continuously held at sub-Kelvin temperatures in the energetic particle environment of L2 orbit. To better replicate the damage which would be produced by energetic particles in this environment, we developed a fully cryogenic irradiation experiment in which a stepper motor controls a screen which can block or reveal an alpha particle emitter. This setup can be used to irradiate aluminum KID arrays fabricated for FIRESS to well-controlled dose levels. In this work, we calculate the damage dose expected for a 5-year mission in L2 orbit, and we irradiate an array to approximately 62 percent of this level. Before and after irradiation, we measure the quasiparticle lifetimes, resonant frequencies, and quality factors of the detectors.
0
0
astro-ph.IM 2026-05-01

LAST pipeline detects transients at 80% efficiency and 90% purity

The Large Array Survey Telescope-Pipeline. II. Image Subtraction and Transient Detection

ZOGY subtraction plus fixed filters reach a 5-sigma depth of 20.3-20.7 mag and deliver clean candidates to the Transient Name Server without machine learning.

Figure from the paper full image
abstract click to expand
Context. The Large Array Survey Telescope (LAST) is a wide-field visual-band survey designed to explore the variable and transient sky with high cadence. Its raw data stream is automatically processed in near real time at the observatory site, producing science-quality images, catalogs, and transient alerts. Transient alerts are then reported to the Transient Name Server (TNS). Aims. The LAST pipeline comprises two major components: (i) processing and calibration of single images followed by coaddition of $20\times20$ s exposures, producing single-image and coadded-image catalogs; and (ii) subtraction of coadded images from calibrated reference images followed by transient detection. In this work we present a detailed description and validation of the second component of the pipeline. Methods. Transient detection is based on the algorithm for proper image subtraction (ZOGY). We combine ZOGY subtraction with the Translient statistic for sub-pixel motion discrimination, together with a sequence of deterministic filtering steps, to produce a clean stream of transient candidates without the use of machine learning. Results. Using commissioning data, the pipeline achieves a preliminary $5\sigma$ limiting magnitude of $20.3$-$20.7$ mag, a single-epoch transient detection efficiency of $\sim80$%, and a purity of $\gtrsim90$% at signal-to-noise ratio of $\geq7.5\sigma$.
astro-ph.IM 2026-05-01

Emulator speeds cosmic-ray radio reconstruction to milliseconds

Radio signal generation in milliseconds: enabling multi-parameter reconstruction of ultra-high-energy cosmic rays

This allows MCMC fitting to recover electromagnetic energy at 8.9 percent resolution and arrival direction at 0.08 degrees.

In recent years, radio detection of ultra-high-energy cosmic rays (UHECRs), with energies above $10^{18}$ eV, has become an established technique. The radio emissions can be simulated with high accuracy using Monte Carlo codes such as ZHAireS and CoREAS. These simulations are essential but are computationally intensive. In this work, we present a machine-learning-based emulator that reproduces radio signal simulations with high accuracy in milliseconds rather than hours. Primary particle properties can then be reconstructed by comparing measured signals to emulated traces using a Markov Chain Monte Carlo approach. Using ZHAireS simulations carried out over the GRANDProto300 experiment layout, the method achieves an 8.9\% resolution on electromagnetic energy and a 0.08{\deg} angular resolution, matching state-of-the-art reconstruction performance. Finally, we apply the method on real data, successfully reconstructing cosmic-ray candidates detected by the GP300 prototype.
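The reconstruction idea, comparing a measured trace against emulated traces inside an MCMC loop, can be illustrated with a toy Metropolis-Hastings fit. Here the "emulator" is a stand-in analytic trace with a single hypothetical `energy` parameter; the paper's emulator is a trained network over many shower parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "emulator": the real one is a neural network; here a cheap
# analytic trace whose amplitude scales with a single energy parameter.
t = np.linspace(0, 1, 200)
def emulate(energy):
    return energy * np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)

true_energy, sigma = 2.5, 0.05
data = emulate(true_energy) + rng.normal(0, sigma, t.size)

def log_post(e):                        # flat prior, Gaussian likelihood
    if e <= 0:
        return -np.inf
    return -0.5 * np.sum((data - emulate(e))**2) / sigma**2

# Metropolis-Hastings random walk over the energy parameter.
chain, e = [], 1.0
lp = log_post(e)
for _ in range(4000):
    prop = e + rng.normal(0, 0.05)
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:
        e, lp = prop, lpp
    chain.append(e)
post = np.array(chain[1000:])           # discard burn-in
print(f"recovered energy: {post.mean():.3f} +/- {post.std():.3f}")
```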
astro-ph.IM 2026-05-01

DeepSpaceYoloDataset adds test2026 split for diverse evaluation

An Extended Evaluation Split for DeepSpaceYoloDataset

The new split supplies varied images to test YOLO detectors of deep sky objects used with smart telescopes.

Recent technological advances in astronomy, particularly the growing popularity of smart telescopes for the general public, make it possible to develop highly effective detection solutions that are accessible to a wide audience, rather than being reserved for major scientific observatories. Published in 2023, DeepSpaceYoloDataset is a collection of annotated images created to train YOLO-based models for detecting Deep Sky Objects, particularly suited for Electronically Assisted Astronomy. In this paper, we present an update to DeepSpaceYoloDataset with the addition of a new split, test2026, designed to evaluate detection models with a greater diversity of images.
astro-ph.IM 2026-05-01

Blended light curves flag lensed supernovae at 0.22% false positive rate

Finding Strongly Lensed Supernovae from Blended Light Curves

Fitting two delayed components to single ZTF curves yields one candidate in 445 normal supernovae for future large surveys.

We present a model-independent, photometry-only framework for identifying strongly lensed supernovae when multiple images are unresolved and blended into a single point source. Building on the simulation-based methodology of Bag et al. (2021), we apply this approach to real Zwicky Transient Facility (ZTF) data using a validation sample of spectroscopically confirmed Type Ia supernovae. The method models the observed flux as a superposition of two time-shifted components, and Bayesian inference is used to estimate the relative scaling and time delay. Applying this framework to 445 well-converged supernovae, we find that only a single object satisfies the selection criteria when adopting a conservative threshold of $\Delta t \ge 12$ days, corresponding to a false positive fraction of $1/445 \approx 0.22\%$. A laxer threshold of $\Delta t \ge 10$ days yields fourteen objects, for a false positive fraction of $3.15\%$. The method provides a scalable and model-independent first-stage filter for identifying lens-like candidates in large time-domain surveys such as the Rubin Observatory's Legacy Survey of Space and Time (LSST).
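The core model, an observed flux that is a superposition of two time-shifted copies of the same light curve, can be sketched as a grid search over the delay with the two amplitudes solved in closed form at each trial delay. The template shape and noise level below are invented for illustration; the paper uses Bayesian inference over real ZTF photometry:

```python
import numpy as np

rng = np.random.default_rng(1)

def template(t, t0=0.0, rise=5.0, fall=20.0):
    """Toy supernova-like light curve (not the paper's model)."""
    x = t - t0
    return np.where(x > 0, (1 - np.exp(-x / rise)) * np.exp(-x / fall), 0.0)

t = np.linspace(-10, 80, 120)
true_ratio, true_dt = 0.6, 15.0
flux = template(t) + true_ratio * template(t, t0=true_dt)
flux += rng.normal(0, 0.01, t.size)

# Grid search over the time delay; amplitudes solved linearly per delay.
best = (np.inf, None, None)
for dt in np.arange(0.0, 40.0, 0.5):
    A = np.vstack([template(t), template(t, t0=dt)]).T
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    resid = np.sum((flux - A @ coef)**2)
    if resid < best[0]:
        best = (resid, dt, coef[1] / coef[0])
print(f"delay = {best[1]:.1f} d, flux ratio = {best[2]:.2f}")
```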
astro-ph.IM 2026-05-01

Pairwise PN method always shrinks binaries near black holes

A benchmark for binary star interaction with a supermassive black hole in general relativity

Tests show statistical agreement with perturbation schemes only for million-solar-mass black holes, while pairwise PN reduces separation at pericentre in every test case.

Most galaxies have supermassive black holes (SMBHs) at their centres, surrounded by stars, with binary systems also present in this environment. We use two schemes - post-Newtonian (PN) and a scalar perturbation to a background metric - to numerically solve the three-body problem of a binary with an SMBH. We test three different formulations for the PN scheme: the Einstein-Infeld-Hoffmann equations, a pair-wise implementation of two-body PN terms for three bodies, and the Arnowitt-Deser-Misner Hamiltonian. We compare these approaches for one-million and one-billion solar mass black holes, and find a statistical match between the two approximations for a stellar-mass binary interacting with a million solar mass black hole. We also perform a statistical study of encounters with this black hole, and find that the higher-order PN formulation matches the metric-with-perturbation scheme. However, we find a decrease in the separation of the binary, and eccentricity variations between the different schemes, around the billion solar mass black hole. This behaviour is not present if the binary has a large separation or is further away from the black hole, owing to decreased general-relativistic effects. We find that the pair-wise PN method results in a decrease in separation at pericentre in all test cases, irrespective of the distance from the black hole or the black hole mass, making it the least reliable method for this problem. Our work highlights the need for caution when interpreting results from different formulations around SMBHs. It also shows that, when using simulations to understand extreme mass ratio inspirals (EMRIs), one should be careful as the binary gets closer to the black hole.
astro-ph.IM 2026-04-30

Bayesian sampling recovers 21cm power spectrum from intensity maps

Bayesian component separation and power spectrum estimation for 21 cm intensity mapping data cubes

The method separates the signal at map level and keeps the estimate within 2 sigma of truth even with strong foregrounds and missing RFI-flagged channels.

Foreground removal remains an ongoing challenge in radio cosmology, and increasingly sensitive experiments necessitate more robust analysis techniques. In this work, we model simulated data from a single-dish intensity mapping experiment, and use the Gibbs sampling and Gaussian constrained realisation (GCR) techniques to draw samples from the posterior probability distribution of the model parameters. This allows for a separation of the foregrounds and 21 cm signal at the map level, as well as recovery of the 1-dimensional HI power spectrum to within statistical uncertainties. Despite the model consisting of over 2 million free parameters in the example presented here, these methods allow us to sample from the Bayesian posterior at a rate of $<30$ seconds per iteration. This framework is also resilient to frequency channel flagging (e.g. due to RFI excision), with the GCR steps effectively in-painting the missing data with statistically-consistent model realisations. The power spectrum is recovered accurately in the presence of strong foreground contamination and RFI flagging -- the estimate falling within $2\sigma$ of the true model in our example, similar to the commonly-used transfer function correction method. Statistical realisations of foreground and HI maps are also recovered, with associated uncertainties available from the full joint posterior distribution of all parameters.
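The Gibbs-sampling idea can be illustrated on a one-dimensional toy: a smooth "foreground" (here a cubic polynomial, an assumption made purely for illustration) plus a white "signal", with the sampler alternating between the Gaussian conditional of each component:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy analogue of map-level separation: smooth foreground plus white
# "21 cm" signal plus noise, separated by alternating Gaussian draws.
n = 400
x = np.linspace(-1, 1, n)
A = np.vander(x, 4)                     # smooth foreground basis (cubic)
true_c = np.array([2.0, -1.0, 3.0, 10.0])
Cs, s2 = 1.0, 0.05                      # signal prior variance, noise variance
sig = rng.normal(0, np.sqrt(Cs), n)
d = A @ true_c + sig + rng.normal(0, np.sqrt(s2), n)

c, s = np.zeros(4), np.zeros(n)
keep = []
for it in range(1000):
    # (1) foreground coefficients | signal: Gaussian linear regression
    prec = A.T @ A / s2 + np.eye(4) * 1e-6       # broad prior on c
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ (d - s) / s2)
    c = mean + np.linalg.cholesky(cov) @ rng.standard_normal(4)
    # (2) signal | foreground: diagonal Gaussian (constrained realisation)
    vs = 1.0 / (1.0 / Cs + 1.0 / s2)
    s = rng.normal(vs * (d - A @ c) / s2, np.sqrt(vs))
    if it >= 200:
        keep.append(s)
s_mean = np.mean(keep, axis=0)
corr = np.corrcoef(s_mean, sig)[0, 1]
print(f"correlation of recovered signal with truth: {corr:.2f}")
```

The real analysis samples over two million parameters with structured covariances; the alternation between conditional draws is the same.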
astro-ph.IM 2026-04-30

IQUEYE gains tenfold sensitivity on Gemini 8.1m telescope

IQUEYE at Gemini South: instrument, science commission, and first results

Precise 0.5-nanosecond photon timing now feasible for studying faint millisecond pulsars and fast radio bursts.

The Italian quantum eye (IQUEYE) is a fast photon counter based on single-photon avalanche diode detectors, capable of preserving a ~0.5 ns/h photon time-of-arrival accuracy. IQUEYE was originally developed for intensity interferometry experiments, but its scientific scope has since been extended towards ultra-fast astronomy, including optical pulsars, millisecond pulsars, and the enigmatic fast radio bursts. IQUEYE's capabilities are mainly limited by the number of photons detected, a quantity that scales with the collecting area of the telescope. Through the visitor instrument program at Gemini South (Cerro Pach\'on, Chile) we brought IQUEYE to the 8.1-m dish, reaching an order-of-magnitude sensitivity increase over previous operations. At Gemini South we installed IQUEYE to observe giant pulse emitters, millisecond pulsars, and transitional millisecond pulsars for over 40 hours in the span of a week.
astro-ph.IM 2026-04-30

Beam-waist and shift parameters cut far-field WFE by 24 percent

Analytical Modeling of Far-Field Wavefront Error with Beam-Waist and Lateral-Shift Effects in Spaceborne Laser Interferometry

The extended Nijboer-Zernike model shows how q = 0.8 and a 2-micron shift yield TTL coupling near the 0.1 pm/nrad requirement for space GW detection missions.

The coupling between far-field wavefront error (WFE) and laser pointing jitter is an important source of tilt-to-length (TTL) noise in spaceborne laser interferometric links. We extend the Nijboer--Zernike analytical model for far-field WFE of truncated Gaussian beams by incorporating two practical initial-condition parameters, the beam-waist-to-aperture ratio $q$ and the normalized lateral spot-shift ratio $s_r$, to account for realistic beam truncation and alignment conditions. Based on this model, we analyze the influence of $q$ on far-field WFE in addition to the conventional received-power trade-off, showing that decreasing $q$ from 1 to 0.9 and from 0.9 to 0.8 reduces the mean far-field WFE by approximately 10\% and 14\%, respectively, in Monte Carlo simulations of random initial aberrations. We also derive the direct contribution of lateral spot shift and its coupling with transmitted WFE (constrained to $\lambda/20$). For the normalized lateral spot-shift ratio $s_r$, a $2~\mu\mathrm{m}$ entrance-pupil displacement in a Taiji-like telescope corresponds to $s_r=0.001$ and produces a phase-angle coupling coefficient of about $0.0892~\mathrm{pm/nrad}$, close to the typical far-field TTL requirement $0.1~\mathrm{pm/nrad}$, while the spot-shift--aberration coupling terms are much smaller and can be neglected in practical tolerance estimation. These results provide a theoretical basis for beam-parameter optimization and alignment tolerance design in future space-based gravitational-wave detection missions.
astro-ph.IM 2026-04-29

Sech profile fits Roman detector charge diffusion better than Gaussian

Charge diffusion and modulation transfer function in a Nancy Grace Roman Space Telescope detector

Laser speckle data give a 0.328-pixel width per axis with no wavelength dependence from 850 to 2000 nm for weak lensing corrections.

The Nancy Grace Roman Space Telescope (Roman) is an observatory motivated by the search to understand dark energy, exoplanets, and general astrophysics. Roman will bring unprecedented amounts of precision to weak gravitational lensing measurements, which necessitates an improved understanding of instrumental signatures in star and galaxy images. One feature is the modulation transfer function (MTF), which includes contributions from charge diffusion in Roman's infrared detector arrays. As part of the detector characterization effort, a detector from the flight lots (but ultimately not selected for flight) was illuminated with a laser speckle pattern. We present an analysis of the laser speckle data, including MTF measurements in several wavelengths. We fit several models for the charge diffusion profile, including: (i) a Gaussian profile; (ii) a hyperbolic secant (sech) profile; and (iii) a general drift-diffusion model that includes the Gaussian and sech as limiting cases. We find that the sech model produces an acceptable fit with no need for the additional parameter and is strongly preferred over the Gaussian. The standard deviation per axis of the sech profile is $0.3279^{+0.0043}_{-0.0042}$(stat)$\pm0.0093$(sys) pixels, with the systematic error dominated by non-linearities. We find no detectable wavelength dependence over the range from 850--2000 nm. The model informs survey strategy for weak lensing measurements and has been included in simulations used to develop the data processing pipelines for the Roman mission.
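For a sech kernel p(x) ∝ sech(x/w), the per-axis standard deviation is pi*w/2, which a short numerical check confirms (the width value below is arbitrary, not the paper's fitted one):

```python
import numpy as np

# Numerical check of the standard deviation of a hyperbolic-secant
# charge-diffusion kernel, p(x) ∝ sech(x / w).
w = 0.2                                 # arbitrary width parameter (pixels)
x = np.linspace(-20, 20, 400001)
dx = x[1] - x[0]
p = 1.0 / np.cosh(x / w)
p /= p.sum() * dx                       # normalise to unit area
sigma = np.sqrt(np.sum(x**2 * p) * dx)  # standard deviation per axis
print(f"sech kernel: w = {w}, sigma = {sigma:.4f} px (analytic pi*w/2 = {np.pi*w/2:.4f})")
```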
astro-ph.IM 2026-04-29

Segment drifts degrade HiCAT contrast by factor of 2.5

Impact of segmented deformable mirrors on high-contrast testbeds for exoplanet imaging with future large space telescopes: contrast stability assessment on the HiCAT bench

Lab tests and simulations link sub-nanometer misalignments to observed performance, emphasizing cophasing needs for future exoplanet missions.

We investigate the stability of a segmented deformable mirror (DM) on high-contrast testbeds and its impact on the images produced with coronagraphs. Segmented apertures are promising for obtaining large primary mirrors for future missions with starlight-suppression capabilities. Even when cophased at the sub-nanometer level, segments can drift into slight misalignments that are harmful for exoplanet observations. We study the impact of misalignments on contrast using the High-contrast Imager for Complex Aperture Telescopes (HiCAT), a testbed which includes a 37-segment DM and produces coronagraphic images with 2.5e-8 contrast in narrowband light. Temporal wavefront errors due to the segmented DM are estimated with a Zernike wavefront sensor. Our in-lab results show aberrations at the sub-nanometer level, which is encouraging for contrast stability studies. We then use a digital twin of HiCAT to simulate coronagraphic images with an initial 0.5e-8 contrast and the segments in flat position. By injecting known perturbations on the segments, we observe a contrast degradation by a factor of 2.5, closely matching the typical contrast observed on HiCAT. These results highlight the importance of segment cophasing sensing and control strategies to ensure the required contrasts for exo-Earth imaging with a large segmented aperture for the Habitable Worlds Observatory mission.
astro-ph.IM 2026-04-29

Platform integrates alert streams for real-time multi-messenger follow-up

Enabling real-time multi-messenger follow-up of transient events with Astro-COLIBRI

Astro-COLIBRI monitors streams, applies filters, and provides context plus observing conditions through web and mobile interfaces.

Time-domain astrophysics is a rapidly growing field focused on the study of transient phenomena such as Gamma-Ray Bursts (GRBs), Fast Radio Bursts (FRBs), supernovae, novae, and AGN flares. Their characterization increasingly relies on a multi-messenger and multi-wavelength approach, combining gravitational waves, high-energy neutrinos, and electromagnetic observations across the spectrum. Such a coordinated strategy requires efficient information sharing and thus tools capable of rapidly compiling and contextualizing key data for each new event. We present Astro-COLIBRI, a well-established platform designed to meet this challenge. Astro-COLIBRI combines a public RESTful API, real-time databases, and a cloud-based alert system. It continuously listens to multiple alert streams, applies user-defined filters, and places each event in its multi-messenger and multi-wavelength context. Through its user-friendly interfaces, including a web application and mobile apps for iOS and Android, the platform provides clear data visualization as well as concise summaries of key event properties and observing conditions for user-defined locations.
astro-ph.IM 2026-04-28

Joint multiband fit cuts scatter in WISE photometry

Joint Multiband Photometry with crowdsource

By sharing source positions across bands, crowdsource produces more consistent fluxes and locations than independent processing.

We present a new multiband extension to the crowdsource photometric pipeline, enabling simultaneous fitting across multiple imaging bands in crowded fields. The core idea is that multiple images of the same part of the sky should have the same sources at the same locations; only the fluxes in the different images should be allowed to vary in fitting. The framework also allows us to use all images of a given region to detect faint sources, with configurable weighting among the different bandpasses as appropriate for different source spectra. Similar concepts are already present in other crowded field packages like DAOPHOT and DOLPHOT; we now include it in the crowdsource fitting approach. We describe the mathematical formulation of the multiband fit and demonstrate its performance using the Wide-field Infrared Survey Explorer (WISE) W1 and W2 imaging as a concrete application. The multiband algorithm improves flux consistency and reduces band-to-band positional scatter relative to independent-band fitting. We test the method on unWISE coadded tiles spanning both sparse and crowded regions and quantify improvements in photometric agreement and astrometric stability. This framework provides a general foundation for future multiband crowded-field catalogs.
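The shared-position idea is easy to sketch in one dimension: the position enters every band's model, while each band's flux is solved in closed form. Everything below (PSF, fluxes, noise) is invented for illustration and far simpler than crowdsource's crowded-field machinery:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D toy of the shared-position idea: one source observed in two bands
# with different fluxes; position is common, flux is per-band.
x = np.arange(40, dtype=float)
def psf(center, sigma=1.5):
    p = np.exp(-0.5 * ((x - center) / sigma)**2)
    return p / p.sum()

true_pos, fluxes = 17.3, (200.0, 80.0)
imgs = [f * psf(true_pos) + rng.normal(0, 0.5, x.size) for f in fluxes]

def joint_chi2(pos):
    """Flux solved in closed form per band; position shared across bands."""
    m = psf(pos)
    chi2 = 0.0
    for img in imgs:
        flux = (img @ m) / (m @ m)      # linear LSQ flux for this band
        chi2 += np.sum((img - flux * m)**2)
    return chi2

grid = np.arange(15.0, 20.0, 0.01)
best = grid[np.argmin([joint_chi2(p) for p in grid])]
print(f"joint position estimate: {best:.2f} (truth {true_pos})")
```

Because both bands constrain the same position, the joint fit is tighter than either band alone, which is the mechanism behind the reduced band-to-band positional scatter.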
astro-ph.IM 2026-04-28

Two stations cut lunar seismic noise by factor 2.3 at 0.3 Hz

Seismic background mitigation with the Lunar Gravitational-wave Antenna

Optimal spacing in an isotropic field lets array processing suppress background for Moon-based gravitational-wave antennas.

Lunar gravitational-wave (GW) detectors relying on the measurement of the response of the Moon to GWs are susceptible to a seismic background, which might pose a fundamental sensitivity limitation. The Lunar Gravitational-wave Antenna (LGWA) was conceived as an array of accelerometers with the idea that data can be processed to distinguish between a GW signal and the seismic background. As a result, the seismic noise of the GW measurement would be mitigated. However, so far, no quantitative assessment of the mitigation of the seismic background has been provided. In this article, we derive the analytical expressions for the optimal squared signal-to-noise ratio considering two seismic stations in an isotropic, random, Gaussian seismic field. Our numerical analysis reveals that the capacity to mitigate the seismic noise critically depends on the distance between the two stations relative to the seismic-correlation length. We demonstrate that optimal placement of the two stations can yield significant improvements in the equivalent seismic noise amplitude spectrum density (ASD), approximately a factor of 2.3 at 0.3 Hz, compared to the measurement with a single station. The equivalent ASD of the seismic noise also exhibits distinct oscillatory and mitigation features arising from the Bessel-function structure of the noise correlation.
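The role of station spacing can be illustrated with the textbook isotropic-field correlation rho(d) = J0(2*pi*d/L) and a simple optimally averaged sensor pair. This toy (with an assumed correlation length, ignoring the paper's full signal model, and not reproducing its factor 2.3) already shows why an intermediate spacing is best:

```python
import numpy as np
from scipy.special import j0

# Two sensors see the same GW-induced response, but seismic noises
# correlated as rho(d) = J0(2*pi*d / L), the isotropic-field correlation
# for seismic wavelength L (assumed value below). The averaged pair has
# noise variance sigma^2 * (1 + rho) / 2, so anticorrelation helps.
L = 1000.0                              # assumed correlation length scale (m)
d = np.linspace(1.0, 5000.0, 5000)
rho = j0(2 * np.pi * d / L)
suppression = np.sqrt((1 + rho) / 2)    # equivalent-ASD ratio vs one station
i = np.argmin(suppression)
print(f"best spacing: d = {d[i]:.0f} m, ASD ratio = {suppression[i]:.2f}")
```

The minimum sits where J0 reaches its first (negative) minimum, which is the Bessel-function structure the abstract refers to.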
astro-ph.IM 2026-04-28

Augmentations boost galaxy models but gains fade with bigger data

The effects of image augmentations when training machine learning models in astronomy

Tests on 230,000 galaxy images show that extra training variations help less once datasets grow large enough to fill model capacity.

We measure the influence of image augmentations and training dataset size when training a deep neural network to classify galaxy morphology. Data augmentation is an integral step when training machine learning models, and astronomers often add augmentations assuming they will always improve the performance of their models. We train multiple versions of the same pre-existing Zoobot model using different image augmentations and different dataset sizes drawn from 230,000 galaxy images from Galaxy Zoo DECaLS to determine whether this assumption is necessarily true. We find that, generally, the addition of image augmentations does improve a deep neural network's performance; however, this improvement is significantly diminished as the training dataset size increases. The choice of specific augmentations (provided they are sensible) does not seem to be as important as simply having augmentations, as different augmentations result in similar increases in performance. We find that for a model of a given size, there exists a saturation point (when the model's capacity has been filled with data) that cannot be surpassed with data augmentations. We find that more complex augmentations result in longer training times and might not lead to improved performance. If augmentations are added to the training process (which is recommended), simpler augmentations might be sufficient, depending on the size of the dataset and model. We therefore encourage astronomers to carefully consider their use of image augmentations in an effort to reduce wasted time and computational resources.
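A minimal example of the kind of augmentation under discussion: label-preserving geometric transforms of a galaxy image (illustrative only; the paper trains Zoobot with its own augmentation pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

def augment(img, rng):
    """Random flip / 90-degree rotation: simple label-preserving
    augmentations for (roughly) orientation-invariant galaxy images."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

img = np.arange(16.0).reshape(4, 4)     # stand-in "galaxy" image
batch = [augment(img, rng) for _ in range(8)]
# These augmentations only permute pixels, so total flux is invariant.
print(all(b.sum() == img.sum() for b in batch))
```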
astro-ph.IM 2026-04-28

Archive supplies calibrated CRIRES+ L and M band spectra

An archive of reduced and telluric-corrected CRIRES+ L- and M-band spectra with slit-tilt and wavelength calibrations

Telluric absorption fits replace missing lamps, delivering 5649 reduced nod-pair spectra from 156 targets for immediate use.

The high-resolution near-infrared spectrograph CRIRES+ at ESO VLT covers the Y, J, H, K, L and M bands. The U-Ne and Fabry-Perot calibration light sources, however, only work up to the K-band, leaving the bands L and M without wavelength calibration, and without a way to measure the inclination of the long slit relative to the detector frame. To remedy this, we present here a uniformly reprocessed archive of all public CRIRES+ L/M science observations obtained between September 2021 and March 2025, totalling 11 131 raw frames. We use the telluric modelling tool viper that fits a model to the plethora of atmospheric absorption features that exist around these wavelengths. We calibrate the slit tilt from the wavelength solutions for the nodding A and B frames that have the target in the lower and upper half of the slit, respectively. We then update the static inputs to the data reduction system with the slit tilt information and reduce the data with the standard pipeline recipes. Subsequently, we derive new wavelength scales for each observation from telluric fits on the spectra themselves, additionally interpolating the solutions for spectra that have no tellurics from the ones that have. The resulting 5649 extracted, calibrated and telluric-fitted AB nod-pair spectra, spanning 156 unique targets from 68 ESO programmes, are served through an interactive web archive at https://www.astro.uu.se/crires-lm that offers data downloads and figures for all datasets that allow an initial judgement of the data quality.
astro-ph.IM 2026-04-28

Polynomial optimization yields three-order accuracy gain in angles-only IROD

Robust Angles-Only Initial Relative Orbit Determination Using Polynomial Optimization

Reduced-order weighting removes line-of-sight singularity and adds robustness to noise and poor initialization while easing later refinement

This paper develops a robust angles-only initial relative orbit determination (IROD) method based on polynomial optimization for arbitrary nonlinear dynamics. First, the relative motion is approximated by high-order Taylor polynomials within the differential algebra framework, and the resulting cross-product-residual minimization problem is solved through a recursive polynomial optimization procedure. Second, a reduced-order weighting strategy is introduced by projecting the residual onto the two-dimensional tangent subspace of the line of sight, thereby structurally removing the intrinsic singularity of conventional three-dimensional weighting. Third, a zero-solution-avoidance constraint together with an adaptive threshold-selection mechanism is developed to improve robustness against poor initialization, strong measurement noise, and unfavorable observation geometries. Numerical simulations show that the proposed method improves IROD accuracy by about three orders of magnitude relative to the baseline methods, while also reducing the downstream orbit-refinement burden. The reduced-order weighting strategy further improves accuracy by about 43% in the nominal case and remains stable under large-noise conditions, outperforming the conventional three-dimensional weighting by about 81%.
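The reduced-order weighting amounts to projecting the residual onto the plane orthogonal to the line-of-sight direction, discarding the radial component that makes full three-dimensional weighting singular. A small sketch of that projector (the vectors below are invented examples):

```python
import numpy as np

def tangent_projector(u):
    """Projector onto the 2-D subspace orthogonal to line-of-sight u:
    P = I - u u^T / |u|^2, which removes the radial component."""
    u = u / np.linalg.norm(u)
    return np.eye(3) - np.outer(u, u)

u = np.array([1.0, 2.0, 2.0])           # example line-of-sight vector
P = tangent_projector(u)
r = np.array([0.3, -0.1, 0.5])          # example residual vector
r_t = P @ r                             # tangential part of the residual

print(np.dot(r_t, u))                   # zero up to rounding: r_t is ⟂ u
```

`P` has rank 2 and is idempotent, which is exactly the "two-dimensional tangent subspace" structure the abstract describes.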
astro-ph.IM 2026-04-28

TANSPEC pipeline now reduces all slits in LR and XD modes

pyTANSPEC v1.0 and HxRGproc: Updated packages to Clean and Reduce TANSPEC data

Template matching improves wavelength precision and flux calibration is added while every slit width becomes usable.

TIFR-ARIES Near-Infrared Spectrometer (TANSPEC) is a spectrograph-cum-imager operating over the wavelength range $0.55 - 2.5~\mu$m. The instrument is mounted on the 3.6-m Devasthal Optical Telescope (3.6-m DOT). It offers two resolution modes, Low Resolution (LR) and Cross-Dispersed (XD), usable with slits of different widths (0.5", 0.75", 1.0", 1.5", 2.0" and 4.0"). The LR mode provides a resolving power ($R$) of $\sim 100-350$, while the XD mode achieves $R\sim2500$ using the 0.5" slit. The previous version of the data reduction pipeline supported only wavelength-calibrated XD mode spectra and was limited to two slits (S-0.5 and S-1.0). In this work, we present an upgraded version of pyTANSPEC. The upgraded pipeline not only improves the data extraction algorithm but also introduces several new features for users. It now enables the reduction of spectra from all available slits for both LR and XD modes. The upgraded version also implements a template-matching method for more precise wavelength calibration. Additionally, a flux calibration step is included. Alongside pyTANSPEC, we upgraded HxRGproc, a Python package for cleaning and generating slope images from Non-Destructive Readout (NDR) frames taken with H1RG and H2RG detectors. The package performs non-linearity correction, flags saturated pixels, removes pink noise, and eliminates cosmic ray events. HxRGproc has been updated to work with the H2RG detector of TANSPEC and is set up on the TANSPEC server, ensuring users receive data that are pre-cleaned and non-linearity corrected.
astro-ph.IM 2026-04-28

C-GFT telescope meets SVOM GRB follow-up design goals

SVOM/C-GFT: Instrumentation and Performances on the SVOM Alerts

After one year the 1.2-m instrument with dual cameras shows reliable rapid response to satellite alerts.

The Chinese Ground Follow-up Telescope (C-GFT) is an optical facility upgraded to support the Space Variable Objects Monitor mission (\textit{SVOM}). Located at the Jilin Observation Station, it is capable of rapidly identifying and monitoring the optical counterparts of Gamma-Ray Bursts (GRBs). The 1.2-m telescope is equipped with two switchable focal-plane instruments: the prime-focus wide-field LATIOS camera and the Cassegrain-focus three-channel CATCH camera. In this paper, we present a system overview, including the observatory, the telescope, the instrumentation, the automated operational framework managed by the Operations Center, and the data processing pipelines. We also report the performance results obtained during over one year of \textit{SVOM}'s post-launch operations. The results demonstrate that the system meets its design specifications and delivers robust observational and operational performance.
astro-ph.IM 2026-04-28

Ground pipelines flag optical afterglows for many SVOM GRB triggers

SVOM/VT: On-ground processing of VT-VHF data

First-year operations show the VT-VHF system identifies candidates in a significant fraction of ECLAIRs events with available data.

The VT--VHF data comprise three types of onboard-processed data results generated from four sequential observational sequences and transmitted to the ground via a Very High Frequency (VHF) downlink. On the ground, these data are processed by three successive pipelines: the pre-processing pipeline, the VT--VHF data processing pipeline (VVPP), and the VT afterglow candidate pipeline (VTAC). These pipelines perform packet decoding, astrometric and photometric calibration, and afterglow candidate identification, respectively. This paper describes the architecture and operational implementation of the VT--VHF ground processing system and assesses its end-to-end performance using the first year of SVOM operations. These data enable rapid identification of GRB optical counterparts. Early detections, while the source is still optically bright, facilitate spectroscopic redshift measurements. Dual-band colors provide preliminary redshift constraints and help identify high-redshift candidates, whereas non-detections in both bands may indicate very high redshift, significant extinction, or intrinsically dark bursts. In-orbit operations show that the VT--VHF ground processing system successfully identifies optical afterglow candidates for a significant fraction of ECLAIRs triggers with available VT--VHF data, demonstrating its robustness and readiness.
astro-ph.IM 2026-04-28

Onboard VT pipeline delivers real-time GRB data within VHF bandwidth limits

SVOM/VT: Real-Time Onboard Data Processing

SVOM/VT onboard pipeline processes images in real time to deliver VHF data for 78% of slewed GRBs and identify optical counterparts in 56% of them.

The SVOM Visible Telescope (VT) is critical for the rapid identification of gamma-ray burst (GRB) optical counterparts, particularly for high-redshift candidates that require immediate infrared spectroscopic follow-up. To address the stringent bandwidth constraints of the VHF downlink while ensuring real-time data availability, we developed the VT Onboard Data Processing Pipeline (VOPP). This paper details the software architecture, algorithms, and hardware implementation of VOPP using an FPGA and a CPU. The pipeline performs essential real-time tasks, including image quality assessment, dark and flat-field correction, and optimized image stacking to mitigate cosmic ray contamination and variable background noise. Furthermore, it generates compact source catalogs and highly compressed 1-bit images to facilitate rapid downlink. In-flight performance analysis confirms the pipeline's robustness, demonstrating the availability of VT VHF data for 78 percent of promptly slewed SVOM GRBs, with 56 percent leading to the identification of optical counterparts, typically within 18 minutes post-trigger.
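The 1-bit image product can be sketched as threshold-then-bitpack. The threshold rule and toy frame below are invented for illustration, not the flight algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of 1-bit image generation for a bandwidth-limited downlink:
# threshold the stacked frame above background, then bit-pack the mask.
img = rng.normal(100.0, 5.0, (64, 64))          # background-dominated frame
img[30:33, 40:43] += 200.0                      # a bright source (3x3 pixels)

thresh = img.mean() + 5 * img.std()             # crude detection threshold
bitmap = img > thresh
packed = np.packbits(bitmap)                    # 8 pixels per byte

ratio = img.astype(np.float32).nbytes / packed.nbytes
print(f"packed {img.shape} frame into {packed.nbytes} bytes ({ratio:.0f}x smaller)")
```

Relative to a float32 frame, the bit-packed mask is a factor of 32 smaller, which is the kind of saving a VHF-class downlink requires.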
astro-ph.IM 2026-04-28

SVOM telescope meets sensitivity and stray light specs in tests

SVOM/VT: Flight Model Verification and Pre-launch Testing

Pre-launch verification shows point-source transmittance below 10^-7 and detection to magnitude 22.5, matching early orbit results.

This paper presents pre-launch testing and calibration results for the SVOM/VT (Space-based Variable Objects Monitor, Visible Telescope) Flight Model (FM), validating its performance under simulated space conditions through thermal vacuum cycling, energy concentration analysis, stray light suppression, and CCD/electronics calibrations (gain, noise, quantum efficiency). The results confirm full compliance with design requirements: stray light suppression achieves point-source transmittance $<10^{-7}$ at $30^\circ$ off-axis, thermal control maintains stable CCD temperatures ($-75^\circ$C for the red channel, $-65^\circ$C for the blue channel), and detection sensitivity meets the limiting magnitude of 22.50 (SNR $>$ 3 with 300 seconds exposure). Early in-orbit tests further validate performance, yielding limiting magnitudes of 22.70 (V-band, red) and 22.78 (blue), consistent with pre-launch specifications.
astro-ph.IM 2026-04-28

GRM pipeline turns GRB detections into analysis products

GRM Scientific Pipeline

Event-driven system on SVOM handles high-energy data in real time and outputs ready-to-use L1B/C files.

The Gamma-Ray Monitor (GRM) is a key payload of the Space-based multiband astronomical Variable Objects Monitor (SVOM) mission, designed to detect gamma-ray bursts (GRBs) within the energy range of 15 keV to 5 MeV. The GRM Instrument Center (GRM\_IC) features real-time data processing through the X-band, enabling rapid response to high-energy GRB events. The system employs an event-driven architecture and a distributed design, achieving efficient processing and real-time monitoring of massive observational data. Through comprehensive data production processes and scientific data product management, the system efficiently produces L1B/C-level scientific data products by submitting jobs to the task scheduling system. With its modular architecture and automated processing workflow, the GRM data processing system realizes precise conversion and scientific analysis of GRB detection data, providing robust technical support for future system upgrades and cross-platform collaboration.
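The event-driven pattern described here can be sketched as a minimal publish/subscribe dispatcher: handlers register for a topic, and each incoming event fans out to all of them. This is a generic illustration, not GRM\_IC code; the class, topic, and product names are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven dispatcher: handlers subscribe to topics,
    and each published event triggers every matching handler in order."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        return [handler(payload) for handler in self._handlers[topic]]

# Hypothetical usage: a burst trigger fans out to two product-building jobs
bus = EventBus()
products = []
bus.subscribe("grb_trigger", lambda evt: products.append(("L1B", evt["id"])))
bus.subscribe("grb_trigger", lambda evt: products.append(("L1C", evt["id"])))
bus.publish("grb_trigger", {"id": "GRB250101A"})
```

The appeal of this design for a pipeline is decoupling: new product levels or monitors can be added by subscribing another handler, without touching the code that emits the triggers.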
astro-ph.IM 2026-04-28

SVOM's BeiDou-3 system supports 172 GRB detections in first year

SVOM Real-time Response and Collaboration System

Real-time communication enabled 1040 observations, including exceptional and multi-messenger targets.

The SVOM mission (Space-based multi-band astronomical Variable Objects Monitor) is a Franco-Chinese mission dedicated to the study of gamma-ray bursts, the most distant explosions of stars. Here, we introduce the real-time response and collaboration system of SVOM, which adopts the BeiDou-3 short-message communication service. We present the SVOM on-board and on-ground system designs and data flow, together with the collaboration mechanism with other missions. In the first year of in-flight operation, SVOM detected 172 gamma-ray bursts, including 147 by the GRM instrument and 62 by the ECLAIRs instrument. At the same time, SVOM performed 1040 observations, including 122 ToO-EX (Target of Opportunity - Exceptional) observations, 48 ToO-MM (Target of Opportunity - Multi-messenger) observations, and 870 ToO-NOM (Target of Opportunity - Nominal) observations. All of these have increased the scientific output of the mission.

browse all of astro-ph.IM → full archive · search · sub-categories