Recognition: 2 theorem links
· Lean Theorem
Deploying Self-Supervised Learning for Real Seismic Data Denoising
Pith reviewed 2026-05-13 00:52 UTC · model grok-4.3
The pith
Adding real noise to noisy seismic inputs enables effective self-supervised denoising without clean references.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that the Noisy-as-Clean SSL method, by adding real noise from the acquisition to the noisy input to form training pairs, delivers a feasible denoising solution for real seismic data. Across the controlled experiments, this approach outperforms training with synthetic additive white Gaussian noise, shows sensitivity to data characteristics and noise levels, and benefits from self-supervised fine-tuning on unseen test data, while remaining independent of the specific network architecture used.
What carries the argument
The Noisy-as-Clean (NaC) mechanism, which treats the observed noisy seismic trace as the target and adds controlled real noise extracted from the same acquisition to create training input-target pairs for a neural network denoiser.
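The mechanism can be sketched in a few lines of numpy. This is a minimal sketch, not the paper's implementation: the pairing logic follows the description above, while the function name, the noise-bank interface, and the m = δ·RMS(x) scaling convention are assumptions for illustration.

```python
import numpy as np

def nac_pairs(noisy_traces, noise_bank, delta=0.5, seed=None):
    """Build Noisy-as-Clean training pairs: the target is the observed
    noisy trace, and the input is that trace plus real noise drawn from
    the same acquisition, scaled by m = delta * RMS(trace) (assumed
    convention for the paper's injection-level parameter)."""
    rng = np.random.default_rng(seed)
    inputs, targets = [], []
    for x in noisy_traces:
        # Sample a real-noise segment from the acquisition's noise bank.
        n = noise_bank[rng.integers(len(noise_bank))]
        # Scale the injected noise relative to the trace's RMS amplitude.
        m = delta * np.sqrt(np.mean(x ** 2))
        n = m * n / (np.sqrt(np.mean(n ** 2)) + 1e-12)
        inputs.append(x + n)   # network input: doubly-noisy trace
        targets.append(x)      # training target: the observed noisy trace
    return np.stack(inputs), np.stack(targets)
```

A denoiser trained on such pairs never sees a clean trace; it only learns to remove the injected increment of noise, which is why the injected statistics must match the field noise.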
If this is right
- Matching the statistical properties of injected noise to the actual field noise is required for the method to succeed on seismic data.
- Self-supervised models gain from fine-tuning directly on the target test set, whereas supervised models show no such gain.
- Both the underlying seismic signal properties and the noise amplitude level affect how well any trained model performs.
- The same network topology works for both the self-supervised and supervised comparisons, indicating the benefit is not tied to architecture choice.
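The fine-tuning observation above can be made concrete with a toy numpy sketch: a per-sample elementwise weight vector stands in for a denoising network (the paper's actual architecture is not described here), adapted on test traces by minimizing the same NaC loss with no clean labels. All names and the δ·RMS scaling convention are assumptions for illustration.

```python
import numpy as np

def finetune_linear_denoiser(w, test_traces, noise_bank, delta=0.5,
                             lr=1e-2, steps=100, seed=0):
    """Toy test-time self-supervised fine-tuning in the NaC style.

    `w` is an elementwise weight vector standing in for a denoiser.
    Each step re-corrupts a *test* trace with scaled real noise and
    takes a gradient step on ||w * (x + n) - x||^2, no clean labels.
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = test_traces[rng.integers(len(test_traces))]
        n = noise_bank[rng.integers(len(noise_bank))]
        m = delta * np.sqrt(np.mean(x ** 2))    # injection level (assumed convention)
        n = m * n / (np.sqrt(np.mean(n ** 2)) + 1e-12)
        inp = x + n
        resid = w * inp - x                     # prediction vs noisy target
        w = w - lr * 2.0 * resid * inp / x.size # gradient of the mean-squared NaC loss
    return w
```

A supervised model, by contrast, has no loss it can evaluate on the unlabeled test set, which is consistent with the asymmetry the second bullet describes.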
Where Pith is reading between the lines
- The method could lower the barrier to denoising in field settings where collecting paired clean data is expensive or impossible.
- Similar noise-injection strategies might apply to other geophysical or time-series signals that lack clean references but have repeatable noise statistics.
- Future checks could test whether the approach scales to 3D seismic volumes or to data with multiple overlapping noise sources.
Load-bearing premise
The filtered versions of the real seismic acquisitions serve as a sufficiently accurate stand-in for clean ground truth when scoring denoising performance.
What would settle it
Measuring denoising quality against independently recorded cleaner seismic traces or against synthetic data with known exact ground truth would show whether the reported gains hold under stricter reference conditions.
read the original abstract
Self-supervised learning (SSL) has emerged as a promising approach to seismic data denoising as it does not require clean reference data. In this work, the deployment of the Noisy-as-Clean (NaC) method was evaluated for real seismic data denoising under controlled conditions. Two independent seismic acquisitions, each comprising noisy and filtered data, were organized into four real datasets. The NaC SSL method was adapted to add real noise to the noisy input, controlled by a parameter. An experimental protocol with ten experiments was designed to compare different strategies for deploying the NaC SSL method with the supervised learning baseline, using identical network topology and hyperparameters. The models were evaluated in terms of denoising performance, computational cost, and generalization capability. The results show that the synthetic additive white Gaussian noise (AWGN) is inadequate for the denoising of seismic data within the NaC method, and performance strongly depends on the compatibility between the injected and actual noise characteristics. Furthermore, both the characteristics of the seismic data and the noise level influence the performance of the model. Self-supervised fine-tuning on test data has improved SSL performance, whereas no such gain was observed for fine-tuning of supervised models. Finally, NaC has shown to be a simple, effective, and model-independent method that offers a feasible solution for the denoising of real seismic data.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript evaluates the Noisy-as-Clean (NaC) self-supervised learning approach for denoising real seismic data. Using two independent acquisitions that each provide noisy and filtered versions, the authors construct four datasets and run a ten-experiment protocol that compares NaC variants (including real-noise injection controlled by a parameter and self-supervised fine-tuning) against supervised baselines that share the same network architecture and hyperparameters. They report that additive white Gaussian noise is inadequate, that performance depends on noise-characteristic compatibility and data properties, that SSL fine-tuning improves results while supervised fine-tuning does not, and conclude that NaC is a simple, effective, and model-independent solution for real seismic denoising.
Significance. If the evaluation methodology is strengthened, the work would offer practical guidance on deploying SSL where clean labels are unavailable in geophysics, underscoring the necessity of matching injected noise statistics to real noise and the value of test-time self-supervised adaptation. The controlled real-data protocol and direct SSL-versus-supervised comparison are useful contributions.
major comments (2)
- [Abstract and experimental protocol] Performance is measured exclusively against the filtered versions of the two acquisitions treated as clean ground truth. No section quantifies residual noise remaining in those filtered traces, assesses whether the filter distorts coherent events, or validates that the real noise added during NaC training statistically matches the unseen test noise. Because both SSL and supervised models optimize to the same imperfect reference, reported gains may reflect matching the filter rather than recovering true signal.
- [Results section] The abstract and protocol description state that ten experiments were performed and that NaC is effective, yet supply no quantitative metrics (e.g., SNR, MSE, or structural similarity values), error bars, or statistical tests. Without these, the magnitude of any advantage over baselines or the effect of fine-tuning cannot be verified.
minor comments (1)
- [Abstract] The abstract would be strengthened by including at least one key quantitative result (with uncertainty) to support the final claim that NaC is effective.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which highlight important aspects of our evaluation methodology. We address each major comment below and will revise the manuscript to incorporate the suggested improvements where feasible.
read point-by-point responses
-
Referee: [Abstract and experimental protocol] Performance is measured exclusively against the filtered versions of the two acquisitions treated as clean ground truth. No section quantifies residual noise remaining in those filtered traces, assesses whether the filter distorts coherent events, or validates that the real noise added during NaC training statistically matches the unseen test noise. Because both SSL and supervised models optimize to the same imperfect reference, reported gains may reflect matching the filter rather than recovering true signal.
Authors: We acknowledge that the filtered versions are an approximation to clean ground truth, a standard practice in real seismic denoising where true clean references are unavailable. In the revised manuscript, we will add a new subsection discussing the filtering procedures used in the original acquisitions, their potential effects on coherent events, and the inherent limitations of this proxy. For the NaC noise injection, we will include statistical validations, such as comparisons of power spectral densities and amplitude distributions between the injected real-noise samples and the noise characteristics in the test sets, to confirm compatibility. While absolute quantification of residual noise in the filtered traces is not possible without true clean data, the controlled protocol with independent acquisitions allows fair relative comparisons between SSL and supervised approaches, both evaluated against the same reference. This helps demonstrate that performance differences arise from the learning strategy rather than solely from filter matching.
revision: partial
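One of the validations the rebuttal proposes, comparing power spectral densities of injected versus test-set noise, could look like the following. This is a sketch using a plain windowed periodogram; the authors' actual validation procedure is not specified, and the function names and distance measure are assumptions.

```python
import numpy as np

def psd(x, fs=1.0):
    """Single-segment periodogram estimate of power spectral density,
    using a Hann window to reduce spectral leakage."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    return (np.abs(X) ** 2) / (fs * len(x))

def spectral_distance(noise_a, noise_b):
    """Mean absolute difference between average log-PSDs of two noise
    ensembles. A small value suggests the injected noise is spectrally
    compatible with the test-set noise."""
    pa = np.mean([psd(n) for n in noise_a], axis=0)
    pb = np.mean([psd(n) for n in noise_b], axis=0)
    return float(np.mean(np.abs(np.log10(pa + 1e-12) - np.log10(pb + 1e-12))))
```

The same comparison could be repeated for amplitude histograms; both checks need only recorded noise segments, not clean data.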
-
Referee: [Results section] The abstract and protocol description state that ten experiments were performed and that NaC is effective, yet supply no quantitative metrics (e.g., SNR, MSE, or structural similarity values), error bars, or statistical tests. Without these, the magnitude of any advantage over baselines or the effect of fine-tuning cannot be verified.
Authors: We apologize for not including the specific numerical results in the submitted manuscript. The ten experiments were evaluated using SNR, MSE, and structural similarity metrics. In the revision, we will add detailed tables reporting these quantitative values (including means and standard deviations across experiments), error bars, and appropriate statistical tests to clearly demonstrate the magnitude of improvements from the NaC variants and the differential effects of self-supervised versus supervised fine-tuning.
revision: yes
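The metrics named in the response are standard. For reference, a minimal implementation of two of them, with definitions assumed (not taken from the manuscript):

```python
import numpy as np

def mse(ref, est):
    """Mean squared error between a reference trace and an estimate."""
    return float(np.mean((np.asarray(ref) - np.asarray(est)) ** 2))

def snr_db(ref, est):
    """Signal-to-noise ratio of the estimate relative to the reference,
    in dB, under the common definition
    10 * log10(reference power / residual power)."""
    ref, est = np.asarray(ref), np.asarray(est)
    num = np.sum(ref ** 2)
    den = np.sum((ref - est) ** 2) + 1e-12  # guard against a perfect match
    return float(10 * np.log10(num / den))
```

Reporting these per experiment with means and standard deviations, as promised, would make the claimed NaC advantage checkable.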
Circularity Check
No circularity: purely empirical comparison with no derivations or self-referential reductions
full rationale
The manuscript describes an experimental protocol that adapts the existing NaC SSL method, trains models on real seismic acquisitions (noisy + filtered pairs), and reports performance metrics against the filtered versions as proxy ground truth. No equations, uniqueness theorems, or parameter-fitting steps are presented that reduce by construction to the inputs; the central claim rests on direct empirical comparisons between SSL and supervised baselines under controlled conditions. Self-citations, if present, are not load-bearing for any derivation. The evaluation protocol is self-contained and externally falsifiable via the reported metrics and datasets.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
"The NaC SSL method was adapted to add real noise to the noisy input, controlled by a parameter... m = δ·RMS(x)"
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
NaC has shown to be a simple, effective, and model-independent method that offers a feasible solution for the denoising of real seismic data.
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] F. K. Anjom, F. Vaccarino, and L. V. Socco. Machine learning for seismic exploration: Where are we and how far are we from the holy grail? Geophysics, 89:WA157–WA178, 2024.
- [2] Dan Zhang, Fangfang Zhou, Felix Albu, Yuanzhou Wei, Xiao Yang, Yuan Gu, and Qiang Li. Unleashing the Power of Self-Supervised Image Denoising: A Comprehensive Review, March 2024. arXiv:2308.00247 [eess].
- [3] Alexander Krull, Tim-Oliver Buchholz, and Florian Jug. Noise2Void - Learning Denoising From Single Noisy Images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2124–2132, June 2019. ISSN: 2575-7075.
- [4] Dan Zhang and Fangfang Zhou. Self-supervised image denoising for real-world images with context-aware transformer. IEEE Access, 11:14340–14349, 2023.
- [5] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. Noise2Noise: Learning Image Restoration without Clean Data. In Proceedings of the 35th International Conference on Machine Learning, pages 2965–2974. PMLR, July 2018. ISSN: 2640-3498.
- [6] Sixiu Liu, Claire Birnie, and Tariq Alkhalifah. Trace-wise coherent noise suppression via a self-supervised blind-trace deep-learning scheme. Geophysics, 88(6):V459–V472, November 2023.
- [7] Jun Xu, Yuan Huang, Ming-Ming Cheng, Li Liu, Fan Zhu, Zhou Xu, and Ling Shao. Noisy-As-Clean: Learning Self-supervised Denoising from the Corrupted Image. IEEE Transactions on Image Processing, 29:9316–9329, 2020.
- [8]
- [9] Samuli Laine, Tero Karras, Jaakko Lehtinen, and Timo Aila. High-Quality Self-Supervised Deep Image Denoising, October 2019. arXiv:1901.10277 [cs].
- [10] Joshua Batson and Loic Royer. Noise2Self: Blind Denoising by Self-Supervision, June 2019. arXiv:1901.11365 [cs].
- [11] Yaochen Xie, Zhengyang Wang, and Shuiwang Ji. Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising, October 2020. arXiv:2010.11971 [cs].
- [12] Zejin Wang, Jiazheng Liu, Guoqing Li, and Hua Han. Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2017–2026, June 2022. ISSN: 2575-7075.
- [13] Alexander Krull, Tomas Vicar, and Florian Jug. Probabilistic Noise2Void: Unsupervised Content-Aware Denoising. Frontiers in Computer Science, 2:5, February 2020. arXiv:1906.00651 [eess].
- [14] Xiaohe Wu, Ming Liu, Yue Cao, Dongwei Ren, and Wangmeng Zuo. Unpaired Learning of Deep Image Denoising, August 2020. arXiv:2008.13711 [eess].
- [15] Yuhui Quan, Mingqin Chen, Tongyao Pang, and Hui Ji. Self2Self With Dropout: Learning Self-Supervised Denoising From Single Image. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1887–1895, June 2020. ISSN: 2575-7075.
- [16] Tao Huang, Songjiang Li, Xu Jia, Huchuan Lu, and Jianzhuang Liu. Neighbor2neighbor: Self-supervised denoising from single noisy images. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14776–14785, 2021.
- [17] Jason Lequyer, Reuben Philip, Amit Sharma, Wen-Hsin Hsu, and Laurence Pelletier. A fast blind zero-shot denoiser. Nature Machine Intelligence, 4(11):953–963, November 2022. Publisher: Nature Publishing Group.
- [18] Claire Birnie, Matteo Ravasi, Sixiu Liu, and Tariq Alkhalifah. The potential of self-supervised networks for random noise suppression in seismic data. Artificial Intelligence in Geosciences, 2:47–59, December 2021.
- [19] Wenqian Fang, Lihua Fu, and Hongwei Li. Unsupervised CNN Based on Self-Similarity for Seismic Data Denoising. IEEE Geoscience and Remote Sensing Letters, 19:1–5, 2022.
- [20] Detao Wang, Guoxiong Chen, Jianwei Chen, and Qiuming Cheng. Seismic Data Denoising Using a Self-Supervised Deep Learning Network. Mathematical Geosciences, 56(3):487–510, April 2024.
- [21] Naihao Liu, Jiale Wang, Jinghuai Gao, Shaojie Chang, and Yihuai Lou. Similarity-Informed Self-Learning and Its Application on Seismic Image Denoising. IEEE Transactions on Geoscience and Remote Sensing, 60:1–13, 2022.
- [22] Catarina de Nazaré Pereira Pinheiro, Roosevelt de Lima Sardinha, Pablo Machado Barros, André Bulcão, Bruno Vieira Costa, and Alexandre Gonçalves Evsukoff. A Self-Supervised One-Shot Learning Approach for Seismic Noise Reduction. Applied Sciences, 14(21):9721, January 2024. Publisher: Multidisciplinary Digital Publishing Institute.
- [23] Zitai Xu, Yisi Luo, Bangyu Wu, and Deyu Meng. S2S-WTV: Seismic Data Noise Attenuation Using Weighted Total Variation Regularized Self-Supervised Learning. IEEE Transactions on Geoscience and Remote Sensing, 61:1–15, 2023. arXiv:2212.13523 [eess].
- [24] Lei Gao, Housen Shen, and Fan Min. Swin transformer for simultaneous denoising and interpolation of seismic data. Computers & Geosciences, 183:105510, 2024.
- [25] Mingwei Wang, Yong Li, Yingtian Liu, Junheng Peng, Huating Li, and Xiaowen Wang. A diffusion-hybrid framework with deformable convolution and multihead self-attention for seismic denoising. Geophysics, 90(6):V633–V644, October 2025.
- [26] Nick Moran, Dan Schmidt, Yu Zhong, and Patrick Coady. Noisier2Noise: Learning to Denoise From Unpaired Noisy Data. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12061–12069, June 2020. ISSN: 2575-7075.
- [27] Tongyao Pang, Huan Zheng, Yuhui Quan, and Hui Ji. Recorrupted-to-Recorrupted: Unsupervised Deep Learning for Image Denoising. Pages 2043–2052, 2021.
- [28] Youssef Mansour and Reinhard Heckel. Zero-Shot Noise2Noise: Efficient Image Denoising without any Data. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14018–14027, June 2023.
- [29] Dan Shao, Yuxing Zhao, Yue Li, and Tonglin Li. Noisy2Noisy: Denoise Pre-Stack Seismic Data Without Paired Training Data With Labels. IEEE Geoscience and Remote Sensing Letters, 19:1–5, 2022.
- [30] Y. X. Zhao, Y. Li, N. Wu, and S. N. Wang. Sample2Sample: an improved self-supervised denoising framework for random noise suppression in distributed acoustic sensing vertical seismic profile data. Geophysical Journal International, 232(3):1515–1532, March 2023.
- [31] Mitsuyuki Ozawa. Enhancing seismic noise suppression using the Noise2Noise framework. Geophysics, 90(2):V97–V110, February 2025.
- [32] Ji Li, Daniel Trad, and Dawei Liu. Robust seismic data denoising via self-supervised deep learning. Geophysics, August 2024. Publisher: Society of Exploration Geophysicists.
- [33] Shijun Cheng, Zhiyao Cheng, Chao Jiang, Weijian Mao, and Qingchen Zhang. An effective self-supervised learning method for various seismic noise attenuation, November 2023. arXiv:2311.02193 [physics].
- [34] Xue Liu, Juan Zou, Xiawu Zheng, Cheng Li, Hairong Zheng, and Shanshan Wang. Iterative data refinement for self-supervised MR image reconstruction, 2022.
- [35] Pablo M. Barros, Roosevelt de L. Sardinha, Giovanny A. M. Arboleda, Lessandro de S. S. Valente, Isabelle R. V. de Melo, Albino Aveleda, André Bulcão, Sergio L. Netto, and Alexandre G. Evsukoff. A Real Benchmark Swell Noise Dataset for Performing Seismic Data Denoising via Deep Learning, October 2024. arXiv:2410.08231 [physics].
- [36] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431–3440, 2015.
discussion (0)