pith. machine review for the scientific record.

arxiv: 2604.16793 · v1 · submitted 2026-04-18 · 🌌 astro-ph.IM · cs.CV


AstroSURE: Learning to Remove Noise from Astronomical Images Without Ground Truth Data


Pith reviewed 2026-05-10 07:25 UTC · model grok-4.3

classification 🌌 astro-ph.IM cs.CV
keywords unsupervised denoising · astronomical imaging · faint source detection · Noise2Noise · HST · CFHT · object detection

The pith

Unsupervised denoising can improve faint source detection in astronomical images without clean ground truth data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests whether deep learning methods for removing noise from astronomical images can be trained using only noisy observations and still help detect faint sources better than the raw data. It adapts Noise2Noise, Stein's Unbiased Risk Estimator, and blind-spot networks and evaluates them on synthetic data plus real images from the Hubble Space Telescope and Canada-France-Hawaii Telescope. Performance is judged mainly by object detection rates rather than image quality scores alone. Results show gains in correct detections of faint objects, especially when training and test data come from similar instruments and domains, while cross-telescope transfer works less well.

Core claim

Unsupervised deep denoising methods trained without ground-truth clean images can raise the correct detection rate of faint astronomical sources relative to the original noisy frames, with stronger results on HST data after domain-consistent initialization and more limited gains when applied to CFHT observations.

What carries the argument

Unsupervised training losses (Noise2Noise, Stein's Unbiased Risk Estimator, and blind-spot masking) that allow denoising networks to learn from noisy astronomical images alone.
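The Noise2Noise premise above can be illustrated on synthetic Gaussian noise (a minimal numpy sketch, not the paper's setup; the image sizes and noise level are illustrative assumptions): the expected MSE against an independent noisy copy differs from the MSE against the clean image only by a constant noise-variance offset, so both losses share the same minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, (64, 64))              # stand-in "clean" frame
noisy_a = clean + rng.normal(0.0, 0.1, clean.shape)  # network input in a Noise2Noise pair
noisy_b = clean + rng.normal(0.0, 0.1, clean.shape)  # independent noisy copy used as target

def mse(x, y):
    return float(np.mean((x - y) ** 2))

# If a denoiser fed noisy_a produced the clean frame exactly, its loss against
# the noisy target noisy_b would sit at the noise-variance floor (0.1**2 = 0.01),
# while its loss against the (unavailable) clean target would be zero.
print(mse(clean, noisy_b))   # close to 0.01, the noise variance
print(mse(clean, clean))     # 0.0 by construction
```

The same offset argument does not hold for structured or signal-dependent noise, which is why the paper also considers SURE and blind-spot losses.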

If this is right

  • Object detection pipelines gain sensitivity to faint sources by adding an unsupervised denoising stage before cataloging.
  • Performance improvements require close matching of noise statistics between training and application data.
  • Synthetic noise models can substitute for real paired clean data when evaluating these methods.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Instrument-specific retraining or fine-tuning may be needed for reliable results across different telescopes.
  • The same unsupervised pipeline could be tested on other large surveys to check whether detection gains generalize beyond HST and CFHT.
  • Downstream tasks such as photometry or morphological classification might also benefit if the denoised images preserve flux and shape information.

Load-bearing premise

The denoising step does not introduce artifacts that increase false detections or hide real faint sources in downstream object detection.

What would settle it

A controlled comparison of source catalogs extracted from the same set of real telescope images both before and after denoising, using an independent verification of true faint sources to measure net change in true-positive and false-positive rates.
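Such a catalog comparison reduces to a cross-match between a detection list and a reference catalog. A minimal sketch of the bookkeeping (greedy pixel-coordinate matching; the function name, radius, and toy coordinates are illustrative assumptions, not from the paper):

```python
import numpy as np

def match_rates(detections, reference, radius=2.0):
    """Greedy nearest-neighbour cross-match between detected and reference
    source positions (arrays of (x, y) in pixels). A detection within
    `radius` of an unused reference source counts as a correct detection."""
    ref_used = np.zeros(len(reference), dtype=bool)
    matched = 0
    for det in detections:
        d = np.hypot(reference[:, 0] - det[0], reference[:, 1] - det[1])
        d[ref_used] = np.inf          # each reference source matches at most once
        j = int(np.argmin(d))
        if d[j] <= radius:
            ref_used[j] = True
            matched += 1
    cdr = matched / len(reference)                               # correct detection rate
    far = (len(detections) - matched) / max(len(detections), 1)  # false alarm rate
    return cdr, far

# toy example: three reference sources, two recovered plus one spurious detection
reference = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 30.0]])
detections = np.array([[10.5, 10.2], [49.0, 51.0], [70.0, 70.0]])
cdr, far = match_rates(detections, reference)
```

Running the same matcher on catalogs extracted before and after denoising, against one independently verified reference, would give the net change in both rates that the proposed test calls for.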

Figures

Figures reproduced from arXiv: 2604.16793 by Omid Vaheb, Sebastien Fabbro, Stark Draper.

Figure 1: Illustrative characteristics of a MegaPrime exposure. (a) The full dynamic range displayed with a logarithmic stretch, revealing both bright sources and faint nebulosity. (b) A zoomed-in region highlighting faint sources near the background noise level, shown with a linear stretch chosen to emphasize noise properties. (c) Histogram of pixel intensities, showing the dominant background peak and the extended…

Figure 2: Architecture of the modified U-Net.

Figure 3: Reconstruction result and PSNR of a noisy simulated image; U-Net 1 and 2 are modified U-Nets with upsampling layers trained using the Noise2Clean and Noise2Noise training schemes, respectively.

Figure 4: Validation PSNR during training of the modified U-Net trained with Noise2Clean and Noise2Noise settings; the y-axis is in dB.

Figure 5: Denoised output of denoisers given the same simulated image contaminated with the simple noise model.

Figure 6: Error maps showing the difference between the denoised outputs of each image and the ground truth target; the image is simulated and contaminated with the simple noise model.

Figure 7: Correct detection rate vs. false alarm rate comparison of different training strategies; subfigure (b) shows the range marked in subfigure (a) by the yellow rectangle.

Figure 8: Overview of the noise formation model synthesized with GalSim.

Figure 9: Comparison of different denoisers in object detection on a simulated image contaminated with the simple noise model. Blue ellipses show detections successfully associated with a reference object; pink ellipses mark objects missed in the detection; red ellipses, mainly visible in the noisy frame, show false alarm detections that the cross-matching algorithm could not asso…

Figure 10: Training loss curve during training of the modified U-Net using different training settings; (a), (b), and (c) depict the L1 training loss used during training; (d) shows the L2 training loss on a logarithmic scale.
read the original abstract

In astronomical imaging, the low photon count of exposures necessitates extensive post-processing steps, including contamination removal and denoising. This paper evaluates deep-learning denoising methods that can be trained without clean ground-truth images and assesses their utility for detection11 oriented analysis of astronomical data. We adapt and compare Noise2Noise, Stein's Unbiased Risk Estimator, and blind-spot-based methods using synthetic data and real observations from the Hubble Space Telescope (HST) and the Canada-France-Hawaii Telescope (CFHT). Performance is evaluated using object-detection metrics, including correct detection rate and false alarm rate, together with image-based metrics and pixel-distribution diagnostics. The results show that these methods can improve faint-source detectability relative to the original noisy images, with encouraging gains on HST data after domain-consistent initialization, while transfer to CFHT data is more limited, highlighting the importance of instrument/domain similarity for unsupervised adaptation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript evaluates unsupervised deep-learning denoising methods (Noise2Noise, Stein's Unbiased Risk Estimator, and blind-spot-based approaches) for astronomical images without requiring clean ground-truth data. It adapts and compares these techniques on synthetic data as well as real observations from HST and CFHT, assessing performance through object-detection metrics (correct detection rate, false alarm rate), image-based metrics, and pixel-distribution diagnostics. The central claim is that the methods improve faint-source detectability relative to the original noisy images, with encouraging gains on HST data after domain-consistent initialization but more limited transfer to CFHT data.

Significance. If the detection improvements prove robust under proper validation, the work could aid processing of large surveys where ground-truth references are unavailable. Strengths include the focus on downstream detection tasks rather than purely image-quality metrics and the explicit comparison of multiple unsupervised methods on real telescope data.

major comments (2)
  1. [Abstract] Abstract and evaluation on real data: The claim that the methods 'improve faint-source detectability' with 'encouraging gains on HST data' is presented without any numerical values, error bars, or statistical significance tests for the correct detection rate and false alarm rate. This leaves the central empirical claim without verifiable quantitative support.
  2. [Results on real observations] Evaluation on real HST/CFHT data: Correct detection and false-alarm rates are computed by matching detections to external catalogs or deeper exposures, yet the manuscript provides no description of controls (e.g., source-injection tests on the original noisy frames or checks for introduced correlated artifacts) to confirm that denoising does not silently suppress real faint sources while only reducing noise-triggered false alarms.
minor comments (1)
  1. [Abstract] Abstract: 'detection11 oriented' appears to be a typographical error and should read 'detection-oriented'.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments, which help clarify the presentation of our quantitative results and the robustness of our real-data evaluation. We respond to each major comment below and indicate the revisions planned for the manuscript.

read point-by-point responses
  1. Referee: [Abstract] Abstract and evaluation on real data: The claim that the methods 'improve faint-source detectability' with 'encouraging gains on HST data' is presented without any numerical values, error bars, or statistical significance tests for the correct detection rate and false alarm rate. This leaves the central empirical claim without verifiable quantitative support.

    Authors: We agree that the abstract would be strengthened by explicit quantitative support. In the revised manuscript we will update the abstract to report the specific improvements in correct detection rate (e.g., the percentage gain observed on HST fields) and the corresponding change in false-alarm rate, together with a brief reference to the error bars and statistical tests already computed in Section 4. This change directly addresses the lack of verifiable numbers while remaining faithful to the results presented in the body of the paper. revision: yes

  2. Referee: [Results on real observations] Evaluation on real HST/CFHT data: Correct detection and false-alarm rates are computed by matching detections to external catalogs or deeper exposures, yet the manuscript provides no description of controls (e.g., source-injection tests on the original noisy frames or checks for introduced correlated artifacts) to confirm that denoising does not silently suppress real faint sources while only reducing noise-triggered false alarms.

    Authors: We acknowledge that an explicit description of controls would increase confidence in the real-data results. Our current evaluation already uses deeper exposures and external catalogs as an independent reference to verify that newly detected sources are real rather than artifacts; however, we did not include source-injection tests or a dedicated artifact analysis. In the revised manuscript we will add a short subsection under 'Results on real observations' that (i) explains how the catalog-matching procedure serves as a control against source suppression and (ii) reports any additional checks for correlated artifacts that can be performed on the existing data. If the referee considers source-injection experiments essential, we are prepared to conduct a limited set on the HST fields and include the outcomes. revision: partial
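A source-injection control of the kind discussed could look like the following toy sketch: inject synthetic Gaussian sources of known flux at known positions into a background-only frame, then check what fraction survive a detection cut. A naive peak threshold stands in for SExtractor here, and all names, fluxes, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_sources(image, positions, flux=10.0, sigma=1.5):
    """Add circular Gaussian sources of known peak flux at known (x, y)
    positions; returns a copy of the image with the synthetics added."""
    out = image.copy()
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    for (x0, y0) in positions:
        out += flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
    return out

def recovered_fraction(image, positions, threshold):
    """Fraction of injected positions whose pixel value exceeds the threshold
    (a stand-in for a real detection pass on the frame)."""
    hits = sum(image[int(y), int(x)] > threshold for (x, y) in positions)
    return hits / len(positions)

noisy = rng.normal(0.0, 1.0, (128, 128))        # background-only frame, unit noise
positions = [(20, 20), (64, 64), (100, 40)]
injected = inject_sources(noisy, positions, flux=10.0)
# With flux well above the noise, all injected sources should clear the cut.
```

Comparing the recovered fraction on the noisy frame and on its denoised counterpart, at matched false-alarm rates, is the control the referee asks for: any systematic drop after denoising would flag source suppression.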

Circularity Check

0 steps flagged

No significant circularity in empirical evaluation

full rationale

The paper adapts standard unsupervised denoising methods (Noise2Noise, SURE, blind-spot) and reports empirical gains on synthetic data plus real HST/CFHT observations via object-detection metrics and external catalog matching. No load-bearing step reduces by construction to a self-definition, fitted parameter renamed as prediction, or self-citation chain; claims rest on independent benchmarks and standard metrics rather than internal tautologies.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the assumption that unsupervised denoising techniques transfer effectively to real astronomical noise without ground truth, plus standard machine learning assumptions about data distributions.

axioms (1)
  • domain assumption Unsupervised denoising methods such as Noise2Noise can be trained effectively without clean ground-truth images
    This is the core premise enabling the adaptation and comparison of the listed methods to astronomical data.
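For the SURE branch of this premise, the mechanism can be checked on a toy problem: for i.i.d. Gaussian noise of known variance, SURE estimates the MSE against the clean signal using only the noisy data. The sketch below uses an assumed linear shrinkage denoiser (not the paper's network), whose divergence is known in closed form; the seed, sample size, and shrinkage factor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 10_000, 0.1
clean = rng.uniform(0.0, 1.0, n)       # unseen ground truth, kept only for checking
y = clean + rng.normal(0.0, sigma, n)  # the observed noisy data

def sure(y, f_y, div, sigma):
    """Stein's Unbiased Risk Estimate of the per-sample MSE against the clean
    signal, for Gaussian noise of known std `sigma`. `div` is the divergence
    of the denoiser evaluated at y (sum of partial derivatives)."""
    return np.mean((y - f_y) ** 2) - sigma**2 + 2 * sigma**2 * div / y.size

a = 0.9                 # toy linear shrinkage denoiser f(y) = a * y, divergence a * n
f_y = a * y
true_mse = float(np.mean((f_y - clean) ** 2))
sure_mse = float(sure(y, f_y, div=a * n, sigma=sigma))
# sure_mse tracks true_mse without ever touching `clean`
```

For a deep network the divergence has no closed form and is typically estimated by Monte Carlo perturbation, but the training signal is the same: a loss computable from noisy data alone.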

pith-pipeline@v0.9.0 · 5454 in / 1187 out tokens · 58017 ms · 2026-05-10T07:25:46.580325+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

63 extracted references · 15 canonical work pages · 2 internal anchors
