pith. machine review for the scientific record.

arxiv: 2603.09178 · v2 · submitted 2026-03-10 · 🌌 astro-ph.IM

Recognition: 2 theorem links · Lean Theorem

Characterizing the Instrumental Profile of LAMOST

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 14:02 UTC · model grok-4.3

classification 🌌 astro-ph.IM
keywords LAMOST · instrumental profile · radial velocity · neural network · multi-layer perceptron · spectroscopic calibration · binary stars · wavelength calibration

The pith

A neural network model of LAMOST's instrumental profile reduces the scatter in stellar radial velocity measurements by about 3 km/s.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper trains a multi-layer perceptron neural network on arc lamp spectra to characterize the instrumental profile of the LAMOST telescope. The profile varies with fiber, wavelength, and time, which traditional fitting methods handle poorly. Once trained, the network supplies the profile for any specific fiber, wavelength, or observation epoch. When this profile is inserted into the radial velocity pipeline, the dispersion among repeated measurements of the same stars falls by roughly 3 km/s. The gain directly improves the ability to detect small, long-term velocity changes that signal long-period binary stars.
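As a rough illustration of that mapping, the sketch below (in Python with PyTorch; the layer sizes, activation functions, and input normalization are illustrative assumptions, not the paper's reported configuration) shows a small MLP that takes a normalized (fiber, wavelength, epoch) triple and returns an instrumental profile sampled on a fixed pixel grid, trained against arc-lamp line cutouts.

    # Hypothetical Payne-style MLP for IP prediction. Inputs (fiber index,
    # wavelength, observation time) are assumed normalized to [0, 1]; the output
    # is the IP sampled on a fixed pixel grid around the line center.
    import torch
    import torch.nn as nn

    class IPNet(nn.Module):
        def __init__(self, n_pixels=31, n_hidden=300):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, n_hidden),         # (fiber, wavelength, time)
                nn.GELU(),
                nn.Linear(n_hidden, n_hidden),
                nn.GELU(),
                nn.Linear(n_hidden, n_pixels),  # IP samples on the pixel grid
            )

        def forward(self, x):
            raw = self.net(x)
            return torch.softmax(raw, dim=-1)   # positive profile with unit sum

    model = IPNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(inputs, observed_profiles):
        # inputs: (batch, 3) normalized (fiber, wavelength, time)
        # observed_profiles: (batch, n_pixels) normalized arc-lamp line cutouts
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), observed_profiles)
        loss.backward()
        optimizer.step()
        return loss.item()

A single trained model of this kind is what lets the pipeline query the IP at an arbitrary fiber, wavelength, and epoch instead of re-fitting a parametric profile case by case.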

Core claim

The authors construct a multi-layer perceptron based on The Payne neural network to derive IPs for LAMOST. After training, the model can retrieve the IP for any fiber, at any wavelength, and at any time. They then apply the derived IP to stellar radial velocity measurements and analyze the impact of different IP center localization methods. The dispersion of the measured RVs is reduced by approximately 3 km/s, which will facilitate the search for long-period binary stars via RV variations.
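One generic way a derived IP enters an RV measurement is sketched below: the IP is convolved with a rest-frame template, which is then Doppler-shifted and cross-correlated with the observed spectrum. This is a standard cross-correlation scheme written with NumPy, not the LAMOST pipeline itself; all names and grids are illustrative.

    # Generic IP-aware cross-correlation RV estimate (illustrative only).
    import numpy as np

    C_KMS = 299792.458  # speed of light, km/s

    def rv_from_ccf(wave, observed, template, ip_kernel, v_grid):
        """Velocity (km/s) that maximizes the cross-correlation.

        wave      : increasing wavelength grid (Angstrom)
        observed  : continuum-normalized observed flux on `wave`
        template  : rest-frame template flux on `wave`
        ip_kernel : instrumental profile sampled in pixels (sums to 1)
        v_grid    : trial radial velocities, km/s
        """
        smeared = np.convolve(template, ip_kernel, mode="same")
        ccf = []
        for v in v_grid:
            # Evaluate the IP-smeared template, Doppler-shifted by v,
            # on the observed wavelength grid.
            shifted = np.interp(wave, wave * (1.0 + v / C_KMS), smeared)
            ccf.append(np.sum(shifted * observed))
        return v_grid[int(np.argmax(ccf))]

How the center of the IP kernel is defined before the convolution is exactly the IP center localization choice the authors analyze, which is why that choice shifts the final RV values.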

What carries the argument

A multi-layer perceptron neural network trained to output the instrumental profile from observed arc-lamp emission-line spectra.

If this is right

  • The IP can be obtained for every fiber, every wavelength, and every observation epoch from a single trained model.
  • Choice of IP center localization method measurably changes the final radial velocity values.
  • Lower RV dispersion enables detection of long-period binary stars through their velocity variations.
  • Neural networks can replace traditional parametric fits when the instrumental profile varies in complex ways.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same training approach could be adapted to other fiber-fed spectrographs that use arc lamps for calibration.
  • Improved RV precision might allow surveys to measure smaller velocity signals from stellar activity or low-mass companions.
  • Testing the network on synthetic spectra with injected, known IPs would quantify any residual bias.
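Expanding on the last point, a minimal injection-recovery check (all numbers here are illustrative, not taken from the paper) would inject a known IP, compare it with a candidate model IP, and translate the residual centroid offset into a velocity bias via the local dispersion.

    # Toy injection-recovery test for residual IP bias (illustrative values).
    import numpy as np

    def gaussian_ip(pixels, center, sigma):
        ip = np.exp(-0.5 * ((pixels - center) / sigma) ** 2)
        return ip / ip.sum()

    def centroid(pixels, profile):
        return np.sum(pixels * profile) / np.sum(profile)

    pixels = np.arange(-15, 16, dtype=float)
    true_ip = gaussian_ip(pixels, center=0.3, sigma=2.0)   # injected "truth"
    model_ip = gaussian_ip(pixels, center=0.0, sigma=2.1)  # candidate model

    # Centroid bias in pixels; multiplying by the local dispersion
    # (km/s per pixel) expresses it as a radial-velocity bias.
    bias_pixels = centroid(pixels, model_ip) - centroid(pixels, true_ip)
    print(f"centroid bias: {bias_pixels:.3f} pixels")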

Load-bearing premise

The neural network accurately captures the true instrumental profile across all fibers, wavelengths, and times without introducing new systematic errors into the radial velocity data.

What would settle it

A direct comparison of radial velocity dispersions measured on the same set of stars using the neural-network IP versus a standard parametric IP; the dispersion would need to drop by ~3 km/s for the claim to hold.
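A small sketch of that comparison, assuming per-star lists of repeated RV measurements are available from both pipelines (the data layout and the averaging choice are hypothetical):

    # Compare per-star RV scatter under two IP treatments (illustrative layout).
    import numpy as np

    def rv_dispersion(rv_by_star):
        """Mean per-star standard deviation of repeated RVs, in km/s.

        rv_by_star: dict mapping star ID -> sequence of repeated RVs (km/s).
        """
        scatters = [np.std(np.asarray(rvs), ddof=1)
                    for rvs in rv_by_star.values() if len(rvs) >= 2]
        return float(np.mean(scatters))

    def dispersion_improvement(rv_baseline, rv_nn_ip):
        # Positive values mean the neural-network IP tightened the RVs.
        return rv_dispersion(rv_baseline) - rv_dispersion(rv_nn_ip)

For the claim to hold, the improvement would need to come out near 3 km/s on the same star sample with all other pipeline elements held fixed.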

read the original abstract

The instrumental profile (IP) of a telescope is of great significance for spectroscopic analyses, especially for wavelength calibration and stellar parameter measurements. The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) employs arc lamps for wavelength calibration. These lamps produce sharp emission lines with known wavelengths, and the observed arc lamp spectra can well characterize the IP. However, IPs are influenced by multiple factors, making them difficult to model accurately with traditional methods. Neural networks, which can automatically capture complex patterns and nonlinear features in data, provide a promising approach for high-precision IP measurement. We therefore construct a multi-layer perceptron (MLP) based on The Payne neural network to derive IPs for LAMOST. After training, the model can retrieve the IP for any fiber, at any wavelength, and at any time. We then apply the derived IP to stellar radial velocity (RV) measurements and analyze the impact of different IP center localization methods on the results. Finally, the dispersion of the measured RVs is reduced by approximately 3 km/s. This improvement will facilitate the search for long-period binary stars via RV variations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript trains a multi-layer perceptron (adapted from The Payne) on LAMOST arc-lamp emission lines to predict the instrumental profile (IP) as a function of fiber, wavelength, and time. The derived IP is then inserted into the stellar radial-velocity extraction pipeline, yielding a reported reduction in RV dispersion of approximately 3 km/s that is said to aid long-period binary searches.

Significance. If the reported improvement is shown to be robust and free of illumination-induced systematics, the work supplies a practical, interpolating model for IP characterization in a large multi-fiber survey. The neural-network formulation that returns IP(fiber, wavelength, time) on demand is a clear technical strength and could be adopted by other fiber-fed spectrographs.

major comments (2)
  1. [Abstract] The central claim of a ~3 km/s reduction in RV dispersion is presented without a baseline value, sample size, uncertainty estimate, or comparison to the pipeline used before the new IP was inserted. These quantities are required to judge whether the improvement is statistically meaningful and attributable to the neural-network IP rather than to other pipeline changes.
  2. [Application to stellar spectra] Application section (following training description): arc-lamp spectra illuminate the entire fiber aperture, whereas stellar light is a seeing-convolved point source whose weighting across the aperture is affected by guiding and aberrations. The manuscript does not quantify or correct for the resulting difference in effective line-spread function; without such a test (e.g., via simulated stellar profiles or on-sky standards) the transfer of the arc-lamp IP to stellar RV measurements remains an unverified assumption.
minor comments (2)
  1. Specify the exact MLP architecture (number of hidden layers, neurons per layer, activation functions) and the train/validation/test split sizes used for the arc-lamp data.
  2. Add a short table or figure showing the measured RV dispersion before and after IP application, together with the number of stars and the wavelength range employed.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments. We address each major point below and have revised the manuscript to improve clarity and provide additional context where possible.

read point-by-point responses
  1. Referee: [Abstract] The central claim of a ~3 km/s reduction in RV dispersion is presented without a baseline value, sample size, uncertainty estimate, or comparison to the pipeline used before the new IP was inserted. These quantities are required to judge whether the improvement is statistically meaningful and attributable to the neural-network IP rather than to other pipeline changes.

    Authors: We agree that the abstract requires additional context to allow proper evaluation of the result. In the revised manuscript we have updated the abstract to state the baseline RV dispersion measured with the standard LAMOST pipeline on the same stellar sample, the number of stars used for the comparison, and the uncertainty on the reported dispersion reduction. We also explicitly note that the neural-network IP was substituted into the existing RV extraction code while holding all other pipeline elements fixed. revision: yes

  2. Referee: [Application to stellar spectra] Application section (following training description): arc-lamp spectra illuminate the entire fiber aperture, whereas stellar light is a seeing-convolved point source whose weighting across the aperture is affected by guiding and aberrations. The manuscript does not quantify or correct for the resulting difference in effective line-spread function; without such a test (e.g., via simulated stellar profiles or on-sky standards) the transfer of the arc-lamp IP to stellar RV measurements remains an unverified assumption.

    Authors: This is a legitimate concern regarding the difference in illumination. The manuscript follows the standard practice of using arc-lamp IPs for stellar RV work, and the observed reduction in RV scatter is consistent across multiple nights. In the revised version we have added a short discussion of the illumination difference and its possible effect on the effective LSF, together with a brief comparison against a small set of on-sky RV standards. A full end-to-end simulation of seeing-convolved stellar profiles lies outside the scope of the present study. revision: partial

Circularity Check

0 steps flagged

No circularity in IP model derivation or RV improvement claim

full rationale

The paper trains an MLP (adapted from The Payne) exclusively on independent arc-lamp emission-line spectra to predict IP(fiber, wavelength, time). This trained model is then applied downstream to separate stellar spectra for RV extraction, with the reported ~3 km/s dispersion reduction observed as an empirical result on the stellar data. No equations or steps reduce the claimed improvement to a fitted parameter by construction, no self-definitional loops exist, and the arc-lamp training set is distinct from the stellar RV test set. The derivation chain is therefore checked against data external to the training set rather than against its own outputs.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The claim rests on the assumption that arc-lamp spectra fully encode the IP and that the neural network generalizes without bias; no new physical entities are introduced.

free parameters (1)
  • MLP weights and biases
    Network parameters are fitted to arc-lamp spectra during training.
axioms (1)
  • domain assumption: Arc lamp emission lines accurately sample the instrumental profile
    The paper states that observed arc lamp spectra can characterize the IP.

pith-pipeline@v0.9.0 · 5526 in / 1110 out tokens · 84361 ms · 2026-05-15T14:02:03.964560+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

31 extracted references · 31 canonical work pages · 1 internal anchor

  1. Anderson, J., & King, I. R. 2000, PASP, 112, 1360
  2. Antilogus, P., Astier, P., Doherty, P., Guyonnet, A., & Regnault, N. 2014, JInst, 9, C03048
  3. Bai, Z.-R., Zhang, H.-T., Yuan, H.-L., et al. 2017, RAA, 17, 091
  4. Bai, Z.-R., Zhang, H.-T., Yuan, H.-L., et al. 2021, RAA, 21, 249
  5. Berlfein, F., Mandelbaum, R., Li, X., et al. 2025, MNRAS, 542, 608
  6. Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, AJ, 154, 28
  7. Bundy, K., Bershady, M. A., Law, D. R., et al. 2015, ApJ, 798, 7
  8. Butler, R. P., Marcy, G. W., Williams, E., et al. 1996, PASP, 108, 500
  9. Cardelli, J. A., Savage, B. D., & Ebbets, D. C. 1990, ApJ, 365, 789
  10. Chang, C., Marshall, P. J., Jernigan, J. G., et al. 2012, MNRAS, 427, 2572
  11. Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, RAA, 12, 1197
  12. De Vries, W. H., Olivier, S. S., Asztalos, S. J., Rosenberg, L. J., & Baker, K. L. 2007, ApJ, 662, 744; Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1
  13. Hao, Z., Ye, H., Han, J., et al. 2020, arXiv:2005.07864
  14. Hirano, T., Kuzuhara, M., Kotani, T., et al. 2020, PASJ, 72, 93
  15. Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, A&A, 553, A6
  16. Kambe, E., Sato, B., Takeda, Y., et al. 2002, PASJ, 54, 1001
  17. Kingma, D. P., & Ba, J. 2015, arXiv:1412.6980; Lançon, A., Gonneau, A., Verro, K., et al. 2021, A&A, 649, A97
  18. Law, D. R., Westfall, K. B., Bershady, M. A., et al. 2021, AJ, 161, 52
  19. Li, C.-H., Benedick, A. J., Fendel, P., et al. 2008, Natur, 452, 610
  20. Liaudat, T. I., Starck, J.-L., & Kilbinger, M. 2023, FrASS, 10, 1158213
  21. Lindegren, L., Klioner, S. A., Hernández, J., et al. 2021, A&A, 649, A2
  22. Liu, L., Jiang, H., He, P., et al. 2019, arXiv:1908.03265
  23. Liu, Q., Bai, Z., Zhou, M., et al. 2026, PyLAMOSTIP: Python Package for Characterizing the Instrumental Profile of LAMOST, v1.0.0, Zenodo, doi:10.5281/zenodo.18667269
  24. Marcy, G. W., & Butler, R. P. 1992, PASP, 104, 270; Milaković, D., & Jethwa, P. 2024, A&A, 684, A38
  25. Plazas, A. A., Shapiro, C., Kannawadi, A., et al. 2016, PASP, 128, 104001
  26. Schmidt, T. M., & Bouchy, F. 2024, MNRAS, 530, 1252
  27. Schmidt, T. M., Molaro, P., Murphy, M. T., et al. 2021, A&A, 646, A144
  28. Smee, S. A., Gunn, J. E., Uomoto, A., et al. 2013, AJ, 146, 32
  29. Ting, Y.-S., Conroy, C., Rix, H.-W., & Cargile, P. 2019, ApJ, 879, 69
  30. Valenti, J. A., Butler, R. P., & Marcy, G. W. 1995, PASP, 107, 966
  31. Zhao, F., Zhao, G., Liu, Y., et al. 2019, MNRAS, 482, 1406