labrador: A domain-optimized machine-learning tool for gravitational wave inference
Pith reviewed 2026-05-10 18:11 UTC · model grok-4.3
The pith
A neural network for gravitational-wave inference achieves a 1% median importance-sampling efficiency over a broad mass range by incorporating physical symmetries.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By compressing gravitational-wave data through heterodyning against a reference waveform chosen via approximate likelihood maximization, transforming to degeneracy-removing coordinates, and folding the parameter space, the labrador model becomes approximately equivariant to changes in the source parameters. This equivariance reduces training cost and makes labrador the first neural posterior estimation code to cover long-duration signals with secondary masses below 10 solar masses, while achieving a median importance-sampling efficiency of 1 percent on quadrupolar, aligned-spin signals across the stated mass range. A numerically stable procedure further enables neural posterior estimation even when the simulation and inference priors differ.
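The heterodyning compression named in the claim can be sketched in a few lines: multiplying the frequency-domain data by the conjugate of a well-matched reference waveform cancels the fast phase evolution, leaving a slowly varying residual that survives averaging over coarse frequency bins. A minimal sketch assuming NumPy-style frequency series; the function name and binning scheme are illustrative, not labrador's actual implementation:

```python
import numpy as np

def heterodyne_compress(d, h_ref, n_bins=64):
    """Divide out the reference waveform's fast phase (via the conjugate
    product), then average the slowly varying residual over coarse bins."""
    ratio = d * np.conj(h_ref)  # nearly constant when h_ref tracks the signal
    edges = np.linspace(0, len(ratio), n_bins + 1).astype(int)
    return np.array([ratio[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
```

The point of the construction is data reduction: a long frequency series collapses to O(n_bins) complex numbers, giving the network a compact, slowly varying input.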
What carries the argument
An approximately equivariant neural posterior estimator constructed through data heterodyning against a reference waveform, degeneracy-removing coordinate transformations, and parameter-space folding.
Load-bearing premise
The combination of heterodyning, tailored coordinates, and parameter folding produces an approximately equivariant network whose performance gains hold without hidden accuracy losses or unaccounted biases when applied to real data or differing priors.
What would settle it
A side-by-side comparison of Labrador and standard sampler results on simulated long-duration gravitational wave signals with secondary masses below 10 solar masses, checking whether the recovered parameter distributions agree within statistical expectations.
Original abstract
Fast and reliable inference of gravitational-wave source parameters is crucial for analyzing large catalogs that are reaching the size of hundreds of detections, and for identifying short-lived electromagnetic counterparts. Neural posterior estimation has emerged as a powerful inference method, where the model is trained on simulated gravitational-wave data at considerable computational cost, but thereafter enables extremely fast and inexpensive inference at test time. Here, we extend this approach by incorporating domain-specific physical insights and methods in the model architecture. These include compressing the data by heterodyning against a reference waveform chosen via approximate likelihood maximization, removing parameter degeneracies through tailored coordinate systems, and eliminating known multimodalities by folding the parameter space. As a result, the network is approximately equivariant to changes in the source parameters, and achieves a reduced training cost and improved model interpretability. Our implementation, called labrador, can be trained end-to-end on a 1-day timescale on $\sim 10^2$ CPU cores and a V100 GPU, achieving a median importance-sampling efficiency of 1% on quadrupolar, aligned-spin signals in a broad mass range (chirp mass $\mathcal M \in 1\text{-}50\,\mathrm{M}_\odot$, mass ratio $q > 0.1$). labrador is the first neural inference code to achieve extensive coverage of long-duration signals with secondary masses $m_2 < 10\,\mathrm{M}_\odot$, rendered possible by its equivariance property. Among our novel contributions is a numerically stable procedure that enables neural posterior estimation when the simulation and inference priors differ.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces labrador, a neural posterior estimation framework for gravitational-wave source parameters that incorporates domain-specific optimizations: heterodyning the data against a reference waveform selected via approximate likelihood maximization, tailored coordinate systems to remove degeneracies, and parameter-space folding to eliminate multimodalities. These steps render the network approximately equivariant, yielding reduced training cost (1 day on ~100 CPU cores plus one V100 GPU) and a reported median importance-sampling efficiency of 1% for quadrupolar, aligned-spin signals over chirp mass 1–50 M⊙ and mass ratio q > 0.1. The work claims to be the first neural code to achieve extensive coverage of long-duration signals with secondary masses m2 < 10 M⊙ and presents a numerically stable procedure enabling neural inference when simulation and inference priors differ.
Significance. If the performance and coverage claims are substantiated by detailed benchmarks, the work would constitute a meaningful step toward scalable inference on large gravitational-wave catalogs. The combination of physical domain adaptations to achieve approximate equivariance and the stable prior-mismatch procedure could reduce computational barriers for low-mass, long-duration signals while offering reusable techniques for physics-informed neural networks where training and target distributions differ.
major comments (3)
- [Abstract] The central performance claim of a 1% median importance-sampling efficiency is stated without supporting details on the number of test injections, the efficiency distribution across the mass range, error bars, or direct comparisons to baseline neural posterior estimation methods without the equivariance steps. These omissions make it impossible to verify whether the reported efficiency reflects genuine gains or hidden accuracy losses in the long-duration, low-mass regime.
- [Abstract and results] The claim that the equivariance property (via heterodyning, tailored coordinates, and parameter folding) enables the first extensive coverage of m2 < 10 M⊙ long-duration signals rests on the untested assumption that these transformations preserve posterior fidelity without introducing systematic shifts. No quantitative checks (such as posterior coverage probabilities, KL divergence to reference samplers, or recovery tests on injected signals) are described to confirm the approximation remains faithful under real noise or prior mismatch.
- [Methods] The novel numerically stable procedure for neural posterior estimation under differing simulation and inference priors is presented only at a high level. Because this procedure is invoked to justify the broad coverage claim, the manuscript must supply the explicit formulation (e.g., the form of the importance weights or regularization term) together with validation experiments demonstrating that posterior accuracy is maintained when the priors differ.
minor comments (2)
- [Abstract] The abstract reports training details (~10^2 CPU cores, V100 GPU, 1-day timescale) but does not compare these costs or the achieved efficiency against prior neural GW inference codes; a concise comparison table would strengthen the contribution statement.
- Clarify the precise definition of 'importance-sampling efficiency' (e.g., effective sample size over total samples) and how it is computed from the network outputs, as this metric underpins all performance claims.
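For context on the last point: a standard definition (not necessarily the paper's) is the effective sample size of the importance weights divided by the number of proposals, eff = (Σ wᵢ)² / (N Σ wᵢ²) with wᵢ = p(θᵢ|d)/q(θᵢ|d), where q is the network posterior. A log-space sketch:

```python
import numpy as np

def sampling_efficiency(log_target, log_proposal):
    """Importance-sampling efficiency: effective sample size over N.
    Shifting by the max log-weight avoids overflow; the shift cancels
    in the ratio (sum w)^2 / (N * sum w^2)."""
    log_w = log_target - log_proposal
    w = np.exp(log_w - log_w.max())
    return float(w.sum() ** 2 / (len(w) * np.sum(w ** 2)))
```

Under this definition, an efficiency of 1% means roughly one effectively independent posterior sample per 100 network draws.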
Simulated Author's Rebuttal
We thank the referee for their insightful comments on our manuscript introducing labrador. We have carefully considered each point and made revisions to address the concerns about substantiating our performance claims and providing more details on our methods. Our point-by-point responses follow.
Point-by-point responses
-
Referee: [Abstract] The central performance claim of a 1% median importance-sampling efficiency is stated without supporting details on the number of test injections, the efficiency distribution across the mass range, error bars, or direct comparisons to baseline neural posterior estimation methods without the equivariance steps. These omissions make it impossible to verify whether the reported efficiency reflects genuine gains or hidden accuracy losses in the long-duration, low-mass regime.
Authors: We agree that the abstract would benefit from more supporting details to allow verification of the performance claim. In the revised manuscript, we have updated the abstract to reference the size of the test set and the range of efficiencies observed. Furthermore, we have added a dedicated results subsection with the efficiency distribution across the mass range, including error bars, and a direct comparison to a baseline neural posterior estimation approach lacking the equivariance steps. These additions demonstrate that the reported efficiency represents genuine improvements without compromising accuracy in the long-duration, low-mass regime. revision: yes
-
Referee: [Abstract and results] The claim that the equivariance property (via heterodyning, tailored coordinates, and parameter folding) enables the first extensive coverage of m2 < 10 M⊙ long-duration signals rests on the untested assumption that these transformations preserve posterior fidelity without introducing systematic shifts. No quantitative checks (such as posterior coverage probabilities, KL divergence to reference samplers, or recovery tests on injected signals) are described to confirm the approximation remains faithful under real noise or prior mismatch.
Authors: We thank the referee for emphasizing the importance of validating the fidelity of the equivariance approximations. Although the manuscript design choices were motivated by preserving posterior structure, we acknowledge that more explicit quantitative checks would strengthen the claims. In the revised version, we have added quantitative validation including posterior coverage probabilities, KL divergence comparisons to reference samplers, and recovery tests on injected signals under real noise and prior mismatch conditions. These checks, now detailed in the results section, confirm that the transformations do not introduce significant systematic shifts and support the extensive coverage of long-duration low-mass signals. revision: yes
-
Referee: [Methods] The novel numerically stable procedure for neural posterior estimation under differing simulation and inference priors is presented only at high level. Because this procedure is invoked to justify the broad coverage claim, the manuscript must supply the explicit formulation (e.g., the form of the importance weights or regularization term) together with validation experiments demonstrating that posterior accuracy is maintained when the priors differ.
Authors: We agree that the description of the numerically stable procedure for differing priors requires more detail. In the revised manuscript, we have expanded the Methods section to include the explicit mathematical formulation of the importance weights and the regularization term used to ensure numerical stability. We have also added validation experiments that demonstrate posterior accuracy is preserved when the simulation and inference priors differ, including direct comparisons with standard sampling methods. These revisions provide the necessary support for the broad coverage claims. revision: yes
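The generic correction this exchange refers to can be sketched as follows: samples drawn under the simulation prior are reweighted by the density ratio of the inference prior to the simulation prior, with log-weights normalized before exponentiation for numerical stability. The function and its interface are hypothetical; the paper's explicit formulation may differ:

```python
import numpy as np

def reweight_to_new_prior(samples, log_prior_sim, log_prior_inf, rng=None):
    """Importance-reweight posterior samples trained under prior pi_sim
    to a different inference prior pi_inf: w_i proportional to
    pi_inf(theta_i) / pi_sim(theta_i), normalized in log space."""
    if rng is None:
        rng = np.random.default_rng()
    log_w = log_prior_inf(samples) - log_prior_sim(samples)
    w = np.exp(log_w - log_w.max())  # stable: largest weight is exactly 1
    w /= w.sum()
    idx = rng.choice(len(samples), size=len(samples), p=w)  # resample
    return samples[idx]
```

Resampling with probabilities w yields draws approximately distributed under the new prior's posterior; monitoring the effective sample size of w guards against severe prior mismatch.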
Circularity Check
No circularity: performance claims derive from independent architectural choices, not self-referential fits or definitions
Full rationale
The paper presents labrador as an engineering implementation that compresses data via heterodyning, adopts tailored coordinates to remove degeneracies, and folds parameter space to eliminate multimodalities. These steps are described as producing approximate equivariance, which in turn enables the reported 1% median importance-sampling efficiency and extended coverage of long-duration low-mass signals. No equation or procedure is shown to define the efficiency or coverage in terms of the network outputs themselves; the numerically stable mismatched-prior procedure is presented as a separate numerical contribution. The performance claims therefore rest on external benchmarks (simulated signals, importance sampling) rather than reducing to a fit or a self-citation by construction.
Axiom & Free-Parameter Ledger
free parameters (1)
- neural network weights and hyperparameters
axioms (1)
- Domain assumption: Quadrupolar, aligned-spin waveforms suffice for the targeted mass range, and the chosen reference waveform yields adequate compression.
Reference graph
Works this paper leans on
-
[1]
chirp distance
Extrinsic parameters: With the heuristic that the amplitude, phase and time of arrival at each detector are usually well measured, Roulet et al. [27, Table I] recast the extrinsic parameters in terms of the detector-network-based sky-location angles (θ_net, φ̂_net); and, for a chosen reference detector, the arrival time t_k0, phase φ̂_ref, and inverse-amplitu...
-
[2]
Intrinsic parameters: For low-mass (inspiral-dominated) sources, Lee et al.
-
[3]
The quantities (µ1, µ2) are constructed as the principal components of the Fisher information matrix for the 1.5 PN phase coefficients of the waveform
introduced coordinates (µ1, µ2, q, χ2z), where µ1 and µ2 are functions of (M, q, χ1z, χ2z), with M the chirp mass, q the mass ratio, and χ1z and χ2z the components of the primary and secondary dimensionless spin vectors along the orbital angular momentum. The quantities (µ1, µ2) are constructed as the principal components of the Fisher information matrix for...
-
[4]
a source overhead from one underfoot,
-
[5]
right- from left-handed polarization,
-
[6]
an orbital phase shift of π,
-
[7]
Following Roulet et al. [27], we fold the parameter space along these symmetry directions to produce unimodal posteriors
a simultaneous phase and polarization shift by π/2; these approximate discrete symmetries generically give rise to posteriors with up to 2^4 = 16 modes (occasionally fewer, for events with high signal-to-noise ratio, three detectors, or measurable higher-order modes). Following Roulet et al. [27], we fold the parameter space along these symmetry directions...
-
[8]
the polarization angle ψ ∈ [0, π)
Some parameters are periodic, e.g. the polarization angle ψ ∈ [0, π). If the posterior peaks near 0/π, it can split, producing a spurious bimodality
-
[9]
Certain parameters have sharp boundaries, e.g. q ≤ 1, requiring more expressive flows
-
[10]
counterweight
Naturally, the location and scale of the posterior depend on the data. To address these issues, we introduce an invertible rescaling transform θ_F ↔ θ_R designed so that, ideally, the rescaled posterior p(θ_R | d) resembles a standard Gaussian. The transformation has three main stages: a. Unconstraining transform. Parameters bounded to a finite interval ar...
-
[11]
We define $e_{kfn} \equiv Q_{kfn}/W_{kf}$ for the first functions describing pure detector phase and time shifts, $n < 2N_{\rm det}$
Extrinsic parameters: We first do a QR decomposition of the (weighted) P matrix, to obtain orthogonalized phase and time coordinates: $W_{kf} P_{kfn} = \sum_m Q_{kfm} R_{mn}$ (A6), where Q is orthogonal (thinking of kf as a joint index) and R is upper triangular: $\sum_{kf} Q_{kfn} Q_{kfm} = \delta_{nm}$ (A7), $R_{m>n} = 0$ (A8). For concreteness we illustrate with a Hanford–Livingston network; gen...
-
[12]
As it turns out, for inspiral-dominated (low-mass) systems the 1PN and 1.5PN terms are comparable in size but quite degenerate
Intrinsic parameters: The remaining components encode the dependence on intrinsic parameters. As it turns out, for inspiral-dominated (low-mass) systems the 1PN and 1.5PN terms are comparable in size but quite degenerate. In other words, we need to include the 0PN, 1PN, and 1.5PN terms to accurately model the signal, but the space of phase profiles associate...
-
[13]
To make them amenable for the neural network, we express certain features in relative rather than absolute form
Feature design: The reference-waveform parameters, which form part of the data representation, serve as context for the conditional normalizing flow. To make them amenable for the neural network, we express certain features in relative rather than absolute form. Changes in the distance, orbital phase, or merger time produce a global rescaling, phase shi...
-
[14]
B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 119, 161101 (2017)
2017
-
[15]
B. P. Abbott et al., The Astrophysical Journal Letters 892, L3 (2020)
2020
-
[16]
W. Niu, C. Hanna, C.-J. Haster, S. Adhicary, P. Baral, A. Baylor, B. Cousins, J. D. E. Creighton, H. Fong, Y.-J. Huang, R. Huxford, P. Joshi, J. Kennington, A. K. Y. Li, R. Magee, D. Meacher, C. Messick, S. Morisaki, C. Posnansky, S. Sachdev, S. Sakon, U. Shah, D. Singh, R. Tapia, L. Tsukada, A. Viets, Z. Yarbrough, and N. Zhang, GW231109 235456: A sub-t...
-
[17]
R. Abbott et al., The Astrophysical Journal Letters 915, L5 (2021)
2021
-
[18]
A. G. Abac et al., The Astrophysical Journal Letters 970, L34 (2024)
2024
-
[19]
The LIGO Scientific Collaboration, the Virgo Collaboration, and the KAGRA Collaboration, GWTC-4.0: Updating the Gravitational-Wave Transient Catalog with observations from the first part of the fourth LIGO–Virgo–KAGRA observing run (2025), arXiv:2508.18082 [gr-qc]
2025
-
[20]
A. H. Nitz, S. Kumar, Y.-F. Wang, S. Kastha, S. Wu, M. Schäfer, R. Dhurkunde, and C. D. Capano, The Astrophysical Journal 946, 59 (2023)
2023
-
[21]
D. Wadekar, J. Roulet, T. Venumadhav, A. K. Mehta, B. Zackay, J. Mushkin, S. Olsen, and M. Zaldarriaga, New black hole mergers in the LIGO–Virgo O3 data from a gravitational wave search including higher-order harmonics (2025), arXiv:2312.06631 [gr-qc]
-
[22]
F. S. Broekgaarden, S. Banagiri, and E. Payne, The Astrophysical Journal 969, 108 (2024)
2024
-
[23]
J. Aasi et al., Classical and Quantum Gravity 32, 074001 (2015)
2015
-
[24]
F. Acernese et al., Classical and Quantum Gravity 32, 024001 (2014)
2014
-
[25]
T. Akutsu, M. Ando, K. Arai, Y. Arai, S. Araki, A. Araya, N. Aritomi, H. Asada, Y. Aso, S. Atsuta, K. Awai, S. Bae, L. Baiotti, M. A. Barton, K. Cannon, E. Capocasa, C.-S. Chen, T.-W. Chiu, K. Cho, Y.-K. Chu, K. Craig, W. Creus, K. Doi, K. Eda, Y. Enomoto, R. Flaminio, Y. Fujii, M.-K. Fujimoto, M. Fukunaga, M. Fukushima, T. Furuhata, S. Haino, K. Hasegawa...
2019
-
[26]
D. Reitze, R. X. Adhikari, S. Ballmer, B. Barish, L. Barsotti, G. Billingsley, D. A. Brown, Y. Chen, D. Coyne, R. Eisenstein, M. Evans, P. Fritschel, E. D. Hall, A. Lazzarini, G. Lovelace, J. Read, B. S. Sathyaprakash, D. Shoemaker, J. Smith, C. Torrie, S. Vitale, R. Weiss, C. Wipf, and M. Zucker, Bulletin of the AAS 51 (2019), https://baas.aas.org/pub/...
2019
-
[27]
S. Hild, M. Abernathy, F. Acernese, P. Amaro-Seoane, N. Andersson, K. Arun, F. Barone, B. Barr, M. Barsuglia, M. Beker, N. Beveridge, S. Birindelli, S. Bose, L. Bosi, S. Braccini, C. Bradaschia, T. Bulik, E. Calloni, G. Cella, E. C. Mottin, S. Chelkowski, A. Chincarini, J. Clark, E. Coccia, C. Colacino, J. Colas, A. Cumming, L. Cunningham, E. Cuoco,...
2011
-
[28]
M. Punturo, M. Abernathy, F. Acernese, B. Allen, N. Andersson, K. Arun, F. Barone, B. Barr, M. Barsuglia, M. Beker, N. Beveridge, S. Birindelli, S. Bose, L. Bosi, S. Braccini, C. Bradaschia, T. Bulik, E. Calloni, G. Cella, E. C. Mottin, S. Chelkowski, A. Chincarini, J. Clark, E. Coccia, C. Colacino, J. Colas, A. Cumming, L. Cunningham, E. Cuoco, S. ...
2010
-
[29]
E. Capote, W. Jia, N. Aritomi, M. Nakano, V. Xu, R. Abbott, I. Abouelfettouh, R. X. Adhikari, A. Ananyeva, S. Appert, S. K. Apple, K. Arai, S. M. Aston, M. Ball, S. W. Ballmer, D. Barker, L. Barsotti, B. K. Berger, J. Betzwieser, D. Bhattacharjee, G. Billingsley, S. Biscans, C. D. Blair, N. Bode, E. Bonilla, V. Bossilkov, A. Branch, A. F. Brooks, D. D...
2025
-
[30]
N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, The Journal of Chemical Physics 21, 1087–1092 (1953)
1953
-
[31]
W. K. Hastings, Biometrika 57, 97–109 (1970)
1970
-
[32]
J. Skilling, Bayesian Analysis 1, 10.1214/06-ba127 (2006)
-
[33]
J. Veitch, V. Raymond, B. Farr, W. Farr, P. Graff, S. Vitale, B. Aylott, K. Blackburn, N. Christensen, M. Coughlin, W. Del Pozzo, F. Feroz, J. Gair, C.-J. Haster, V. Kalogera, T. Littenberg, I. Mandel, R. O'Shaughnessy, M. Pitkin, C. Rodriguez, C. Röver, T. Sidery, R. Smith, M. Van Der Sluys, A. Vecchio, W. Vousden, and L. Wade, Phys. Rev. D 91, 042003 (2015)
2015
-
[34]
Rapid and accurate parameter inference for coalescing, precessing compact binaries
J. Lange, R. O'Shaughnessy, and M. Rizzo, Rapid and accurate parameter inference for coalescing, precessing compact binaries (2018), arXiv:1805.10457 [gr-qc]
2018
-
[35]
G. Ashton, M. Hübner, P. D. Lasky, C. Talbot, K. Ackley, S. Biscoveanu, Q. Chu, A. Divakarla, P. J. Easter, B. Goncharov, F. H. Vivanco, J. Harms, M. E. Lower, G. D. Meadors, D. Melchor, E. Payne, M. D. Pitkin, J. Powell, N. Sarin, R. J. E. Smith, and E. Thrane, The Astrophysical Journal Supplement Series 241, 27 (2019)
2019
-
[36]
G. Ashton and C. Talbot, Monthly Notices of the Royal Astronomical Society 507, 2037–2051 (2021)
2021
-
[37]
C. M. Biwer, C. D. Capano, S. De, M. Cabero, D. A. Brown, A. H. Nitz, and V. Raymond, Publications of the Astronomical Society of the Pacific 131, 024503 (2019)
2019
-
[38]
M. Breschi, R. Gamba, and S. Bernuzzi, Phys. Rev. D 104, 042001 (2021)
2021
-
[39]
N. J. Cornish, Phys. Rev. D 103, 104057 (2021)
2021
-
[40]
J. Roulet, S. Olsen, J. Mushkin, T. Islam, T. Venumadhav, B. Zackay, and M. Zaldarriaga, Phys. Rev. D 106, 123015 (2022)
2022
-
[41]
S. Fairhurst, C. Hoy, R. Green, C. Mills, and S. A. Usman, Phys. Rev. D 108, 082006 (2023)
2023
-
[42]
V. Tiwari, C. Hoy, S. Fairhurst, and D. MacLeod, Phys. Rev. D 108, 023001 (2023)
2023
-
[43]
K. W. K. Wong, M. Isi, and T. D. P. Edwards, The Astrophysical Journal 958, 129 (2023)
2023
-
[44]
A. J. K. Chua and M. Vallisneri, Phys. Rev. Lett. 124, 041102 (2020)
2020
-
[45]
S. R. Green, C. Simpson, and J. Gair, Phys. Rev. D 102, 104057 (2020)
2020
-
[46]
H. Gabbard, C. Messenger, I. S. Heng, F. Tonolini, and R. Murray-Smith, Nature Physics 18, 112–117 (2021)
2021
-
[47]
S. R. Green and J. Gair, Machine Learning: Science and Technology 2, 03LT01 (2021)
2021
-
[48]
M. Dax, S. R. Green, J. Gair, J. H. Macke, A. Buonanno, and B. Schölkopf, Phys. Rev. Lett. 127, 241103 (2021)
2021
-
[49]
M. Dax, S. R. Green, J. Gair, M. Deistler, B. Schölkopf, and J. H. Macke, in International Conference on Learning Representations (2022)
2022
-
[50]
M. Dax, S. R. Green, J. Gair, M. Pürrer, J. Wildberger, J. H. Macke, A. Buonanno, and B. Schölkopf, Phys. Rev. Lett. 130, 171403 (2023)
2023
- [51]
-
[52]
D. Chatterjee, E. Marx, W. Benoit, R. Kumar, M. Desai, E. Govorkova, A. Gunny, E. Moreno, R. Omer, R. Raikman, M. Saleem, S. Aggarwal, M. W. Coughlin, P. Harris, and E. Katsavounidis, Machine Learning: Science and Technology 5, 045030 (2024)
2024
-
[53]
M. Dax, S. R. Green, J. Gair, N. Gupte, M. Pürrer, V. Raymond, J. Wildberger, J. H. Macke, A. Buonanno, and B. Schölkopf, Nature 639, 49–53 (2025)
2025
- [54]
-
[55]
A. Spadaro, J. Gair, D. Gerosa, S. R. Green, R. Buscicchio, N. Gupte, R. Tenorio, S. Clyne, M. Pürrer, and N. Korsakova, Accurate and efficient simulation-based inference for massive black-hole binaries with LISA (2026), arXiv:2603.20431 [astro-ph.HE]
-
[56]
D. Rezende and S. Mohamed, in Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 37, edited by F. Bach and D. Blei (PMLR, Lille, France, 2015) pp. 1530–1538
2015
-
[57]
J. Ho, A. Jain, and P. Abbeel, in Advances in Neural Information Processing Systems, Vol. 33, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Curran Associates, Inc., 2020) pp. 6840–6851
2020
-
[58]
Flow Matching for Generative Modeling
Y. Lipman, R. T. Q. Chen, H. Ben-Hamu, M. Nickel, and M. Le, Flow matching for generative modeling (2023), arXiv:2210.02747 [cs.LG]
2023
-
[59]
K. Leyde, S. R. Green, A. Toubiana, and J. Gair, Phys. Rev. D 109, 064056 (2024)
2024
-
[60]
C. Andral, Combining normalizing flows and quasi-Monte Carlo (2024), arXiv:2401.05934 [stat.CO]
-
[61]
I. Mandel, W. M. Farr, and J. R. Gair, Monthly Notices of the Royal Astronomical Society 486, 1086 (2019)
2019
-
[62]
E. Thrane and C. Talbot, Publications of the Astronomical Society of Australia 36, 10.1017/pasa.2019.2 (2019)
-
[63]
J. Roulet, T. Venumadhav, B. Zackay, L. Dai, and M. Zaldarriaga, Phys. Rev. D 102, 123022 (2020)
2020
-
[64]
J. Roulet, L. Dai, T. Venumadhav, B. Zackay, and M. Zaldarriaga, Phys. Rev. D 99, 123022 (2019)
2019
-
[65]
S. Roy, A. S. Sengupta, and P. Ajith, Phys. Rev. D 99, 024048 (2019)
2019
-
[66]
S. Schmidt, S. Caudill, J. D. E. Creighton, R. Magee, L. Tsukada, S. Adhicary, P. Baral, A. Baylor, K. Cannon, B. Cousins, B. Ewing, H. Fong, R. N. George, P. Godwin, C. Hanna, R. Harada, Y.-J. Huang, R. Huxford, P. Joshi, J. Kennington, S. Kuwahara, A. K. Y. Li, D. Meacher, C. Messick, S. Morisaki, D. Mukherjee, W. Niu, A. Pace, C. Posnansky, A. Ray,...
2024
-
[67]
I. Harry, J. C. Bustillo, and A. Nitz, Phys. Rev. D 97, 023004 (2018)
2018
-
[68]
K. Chandra, J. C. Bustillo, A. Pai, and I. W. Harry, Phys. Rev. D 106, 123003 (2022)
2022
-
[69]
S. Schmidt, B. Gadre, and S. Caudill, Phys. Rev. D 109, 042005 (2024)
2024
-
[70]
K. S. Phukon, P. Schmidt, and G. Pratten, Phys. Rev. D 111, 043040 (2025)
2025
-
[71]
S. Babak, R. Biswas, P. R. Brady, D. A. Brown, K. Cannon, C. D. Capano, J. H. Clayton, T. Cokelaer, J. D. E. Creighton, T. Dent, A. Dietz, S. Fairhurst, N. Fotopoulos, G. González, C. Hanna, I. W. Harry, G. Jones, D. Keppel, D. J. A. McKechan, L. Pekowsky, S. Privitera, C. Robinson, A. C. Rodriguez, B. S. Sathyaprakash, A. S. Sengupta, M. Vallisne...
2013
-
[72]
L. Blanchet, Living Reviews in Relativity 5, 10.12942/lrr-2002-3 (2002)
-
[73]
D. A. Brown, P. Kumar, and A. H. Nitz, Phys. Rev. D 87, 082004 (2013)
2013
-
[74]
B. Allen, W. G. Anderson, P. R. Brady, D. A. Brown, and J. D. E. Creighton, Phys. Rev. D 85, 122006 (2012)
2012
-
[75]
R. Storn and K. Price, Journal of Global Optimization 11, 341–359 (1997)
1997
-
[76]
P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, I. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriks...
2020
- [77]
- [78]
-
[79]
N. J. Cornish, Phys. Rev. D 104, 104054 (2021)
2021
-
[80]
A. H. Nitz, K. Kacanja, and K. Soni, Beyond FINDCHIRP: Breaking the memory wall and optimal FFTs for gravitational-wave matched-filter searches with ratio-filter dechirping (2026), arXiv:2601.18835 [astro-ph.IM]
2026