pith. machine review for the scientific record.

arxiv: 2604.06739 · v1 · submitted 2026-04-08 · 💻 cs.CV

Recognition: 2 Lean theorem links

DOC-GS: Dual-Domain Observation and Calibration for Reliable Sparse-View Gaussian Splatting

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:59 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D Gaussian Splatting · Sparse-view reconstruction · Gaussian primitive reliability · Depth-guided dropout · Dark channel prior · Artifact mitigation · Geometric pruning

The pith

The DOC-GS framework models Gaussian reliability through dual-domain signals to reduce artifacts in sparse-view 3D Gaussian Splatting.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Sparse-view 3D Gaussian Splatting often produces distorted structures and hazy artifacts because some Gaussian primitives lack sufficient constraints during optimization. The paper argues that these unreliable primitives can be observed and corrected by combining an optimization-domain signal, where the dropout probability in a depth-guided strategy indicates how well each primitive is constrained, with an observation-domain signal from the dark channel prior that flags inconsistent regions across views. Together the two signals drive pruning of low-reliability Gaussians, yielding more stable and accurate reconstructions. A reader would care because the work addresses a key ill-posedness in representing 3D scenes from few images, potentially enabling better results in practical settings with limited capture.

Core claim

The central claim is that insufficiently constrained ("unreliable") Gaussians accumulate as haze-like degradations in rendered images, and that they can be mitigated by a unified framework: Continuous Depth-Guided Dropout imposes a smooth depth-aware inductive bias during optimization, while the Dark Channel Prior accumulates cross-view evidence for reliability-driven geometric pruning.
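The abstract does not give the dropout schedule itself; a minimal sketch of what a continuous depth-guided dropout probability could look like, assuming a hypothetical schedule that grows with depth and anneals over training (the names and the exact functional form are illustrative, not the paper's):

```python
import numpy as np

def cdgd_dropout_prob(depth, iteration, max_iter, d_near, d_far, p_max=0.5):
    """Hypothetical continuous depth-guided dropout schedule: deeper,
    weakly constrained primitives are dropped more often, and the
    overall strength anneals as training converges. DOC-GS's actual
    formulation is not specified in the abstract."""
    # Normalize depth to [0, 1] over the scene's depth range.
    d = np.clip((depth - d_near) / (d_far - d_near), 0.0, 1.0)
    # Anneal the dropout strength linearly over training iterations.
    anneal = 1.0 - iteration / max_iter
    return p_max * d * anneal

def apply_dropout(opacities, probs, rng):
    """Zero out the opacity of dropped Gaussians for one training step."""
    keep = rng.random(opacities.shape) >= probs
    return opacities * keep

depths = np.array([0.5, 2.0, 8.0])
probs = cdgd_dropout_prob(depths, iteration=1000, max_iter=10000,
                          d_near=0.5, d_far=10.0)
```

The key property the sketch preserves is that the dropout probability is a smooth, monotone function of depth, so it can double as the per-primitive reliability proxy the paper describes.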

What carries the argument

The Dual-domain Observation and Calibration (DOC-GS) framework, specifically the Continuous Depth-Guided Dropout (CDGD) strategy as a proxy for primitive reliability and the Dark Channel Prior (DCP) for identifying anomalous regions.

If this is right

  • Suppressing weakly constrained Gaussians improves optimization stability in sparse-view training.
  • Pruning based on aggregated cross-view evidence removes floaters and reduces artifacts.
  • Rendered images exhibit fewer structural distortions and translucent haze.
  • Overall reconstruction quality increases without relying on purely heuristic regularization.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Applying similar dual-domain ideas could help in other ill-posed reconstruction tasks like neural radiance fields.
  • Tracking reliability over training iterations might allow dynamic adjustment beyond fixed pruning.
  • This approach suggests that artifact formation has identifiable signatures that can be used for self-correction in rendering pipelines.

Load-bearing premise

Unreliable Gaussians are the main cause of artifacts, and the dropout probability plus dark channel prior accurately identify them across different scenes.
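If that premise holds, the pruning step reduces to aggregating anomaly evidence per Gaussian across views. A hypothetical aggregation rule, assuming binary per-view anomaly votes (e.g. from thresholded dark-channel masks back-projected onto primitives); the paper's actual rule may differ:

```python
import numpy as np

def prune_by_cross_view_evidence(anomaly_votes, view_counts, vote_thresh=0.5):
    """Reliability-driven pruning sketch: a Gaussian flagged as anomalous
    in more than `vote_thresh` of the views that observe it is removed."""
    frac = anomaly_votes / np.maximum(view_counts, 1)
    return frac <= vote_thresh   # boolean keep-mask

votes = np.array([0, 2, 5])      # anomaly votes per Gaussian
seen = np.array([6, 6, 6])       # number of views observing each Gaussian
keep = prune_by_cross_view_evidence(votes, seen)
```

Cross-view aggregation is what distinguishes this from per-image heuristics: a primitive must look anomalous from several viewpoints before it is pruned.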

What would settle it

A test on benchmark datasets such as LLFF and MipNeRF360: if applying the dropout and pruning yields no measurable improvement in PSNR, SSIM, or visual quality over baseline 3DGS, the reliability framing fails; consistent gains would support it.
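The headline metric such a test would move is PSNR, which follows directly from the mean squared error between rendered and held-out views:

```python
import numpy as np

def psnr(rendered, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((rendered - reference) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
ref = np.zeros((8, 8))
out = ref + 0.1
val = psnr(out, ref)
```

SSIM adds a structural term on top of this pixelwise comparison; both would need to improve together for the claim to hold.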

Figures

Figures reproduced from arXiv: 2604.06739 by Debin Zhao, Hantang Li, Qiang Zhu, Xiandong Meng, Xiaopeng Fan.

Figure 1: Motivation of DOC-GS for sparse-view Gaussian splatting. (a) To explore the degradation characteristics of rendered …
Figure 2: Overview of DOC-GS. Initial Gaussian primitives from sparse-view SfM are jointly optimized through two complemen…
Figure 3: Qualitative comparison of our method and three existing methods on the LLFF …
Figure 4: Qualitative comparison of our method and three existing methods on the MipNeRF360 …
Figure 5: Compatibility to 3DGS variants on the LLFF dataset
read the original abstract

Sparse-view reconstruction with 3D Gaussian Splatting (3DGS) is fundamentally ill-posed due to insufficient geometric supervision, often leading to severe overfitting and the emergence of structural distortions and translucent haze-like artifacts. While existing approaches attempt to alleviate this issue via dropout-based regularization, they are largely heuristic and lack a unified understanding of artifact formation. In this paper, we revisit sparse-view 3DGS reconstruction from a new perspective and identify the core challenge as the unobservability of Gaussian primitive reliability. Unreliable Gaussians are insufficiently constrained during optimization and accumulate as haze-like degradations in rendered images. Motivated by this observation, we propose a unified Dual-domain Observation and Calibration (DOC-GS) framework that models and corrects Gaussian reliability through the synergy of optimization-domain inductive bias and observation-domain evidence. Specifically, in the optimization domain, we characterize Gaussian reliability by the degree to which each primitive is constrained during training, and instantiate this signal via a Continuous Depth-Guided Dropout (CDGD) strategy, where the dropout probability serves as an explicit proxy for primitive reliability. This imposes a smooth depth-aware inductive bias to suppress weakly constrained Gaussians and improve optimization stability. In the observation domain, we establish a connection between floater artifacts and atmospheric scattering, and leverage the Dark Channel Prior (DCP) as a structural consistency cue to identify and accumulate anomalous regions. Based on cross-view aggregated evidence, we further design a reliability-driven geometric pruning strategy to remove low-confidence Gaussians.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes DOC-GS, a Dual-Domain Observation and Calibration framework for sparse-view 3D Gaussian Splatting. It identifies unreliable Gaussians (insufficiently constrained during optimization) as the source of structural distortions and haze-like artifacts. The method combines an optimization-domain inductive bias via Continuous Depth-Guided Dropout (CDGD), where dropout probability proxies primitive reliability and imposes depth-aware regularization, with an observation-domain component that uses the Dark Channel Prior (DCP) to detect anomalous regions across views and applies reliability-driven geometric pruning to remove low-confidence Gaussians.

Significance. If the dual-domain reliability modeling holds, the work could meaningfully improve reconstruction stability in under-constrained sparse-view 3DGS settings, which remain a practical bottleneck. The explicit proxy formulation and cross-view aggregation offer a more unified treatment than prior heuristic dropout methods, with potential for broader adoption in real-world capture scenarios lacking dense views.

major comments (2)
  1. [Method (CDGD description)] The central claim that CDGD dropout probability serves as an explicit proxy for per-primitive constraint level (optimization domain) lacks direct empirical grounding. No correlation analysis is provided between dropout rates and optimization sensitivity measures such as per-Gaussian gradient norms, Hessian traces, or ablation on view count; without this, the mapping remains an untested modeling assumption that underpins the entire regularization strategy.
  2. [Observation-domain calibration and pruning] The observation-domain half rests on the unverified assumption that DCP-detected dark-channel anomalies reliably coincide with floater/haze artifacts rather than other rendering issues. The manuscript does not report precision-recall of the aggregated DCP masks against manually labeled floaters or controlled experiments isolating DCP's contribution from other priors, which is load-bearing for the pruning step's validity.
minor comments (2)
  1. [CDGD strategy] Notation for the continuous depth-guided dropout probability schedule is introduced without an explicit equation or pseudocode; adding a compact formulation (e.g., p(d) = f(depth, iteration)) would improve reproducibility.
  2. [Introduction] The abstract and introduction repeatedly use 'unreliable Gaussians' without an initial formal definition; a short paragraph early in the paper defining reliability in terms of constraint degree would aid readers.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our work. We address the two major comments point by point below, acknowledging where additional empirical support is warranted and outlining planned revisions.

read point-by-point responses
  1. Referee: [Method (CDGD description)] The central claim that CDGD dropout probability serves as an explicit proxy for per-primitive constraint level (optimization domain) lacks direct empirical grounding. No correlation analysis is provided between dropout rates and optimization sensitivity measures such as per-Gaussian gradient norms, Hessian traces, or ablation on view count; without this, the mapping remains an untested modeling assumption that underpins the entire regularization strategy.

    Authors: We agree that the manuscript would benefit from explicit empirical validation of the proxy relationship. The CDGD formulation is motivated by the observation that dropout probability directly encodes the degree of optimization constraint (higher probability for primitives receiving weaker supervision), but we did not include correlation studies against gradient norms, Hessian traces, or view-count ablations in the original submission. In the revision we will add a dedicated analysis subsection reporting Pearson correlations between per-Gaussian dropout rates and gradient-norm magnitudes across varying view counts, together with an ablation that varies the number of input views while tracking both dropout statistics and reconstruction metrics. revision: yes
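The promised correlation analysis amounts to a Pearson coefficient between per-Gaussian dropout rates and gradient-norm magnitudes. A sketch on synthetic data (standing in for the per-Gaussian training statistics the revision would report; under the CDGD hypothesis the correlation should be strongly negative):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

rng = np.random.default_rng(1)
grad_norms = rng.random(500)                 # stand-in constraint measure
# CDGD hypothesis: dropout rate falls as the constraint level rises.
dropout_rates = 1.0 - grad_norms + 0.1 * rng.standard_normal(500)
r = pearson_r(dropout_rates, grad_norms)     # strongly negative
```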

  2. Referee: [Observation-domain calibration and pruning] The observation-domain half rests on the unverified assumption that DCP-detected dark-channel anomalies reliably coincide with floater/haze artifacts rather than other rendering issues. The manuscript does not report precision-recall of the aggregated DCP masks against manually labeled floaters or controlled experiments isolating DCP's contribution from other priors, which is load-bearing for the pruning step's validity.

    Authors: We acknowledge that quantitative validation of the DCP-to-artifact correspondence was not provided. While the dark-channel prior is a well-established cue for scattering-like degradations and we aggregate it across views to identify unreliable regions, the original manuscript lacks precision-recall figures against labeled floaters and isolated ablations of the DCP term. We will revise the observation-domain section to include (i) precision-recall evaluation of the aggregated DCP masks on a manually annotated subset of scenes and (ii) a controlled ablation that disables the DCP-driven pruning while retaining the optimization-domain component, thereby isolating its contribution. revision: yes

Circularity Check

0 steps flagged

No circularity: new inductive biases and external priors introduced without reduction to inputs

full rationale

The derivation introduces CDGD as a new strategy that defines dropout probability as a proxy for constraint level by construction of the method itself, not by fitting a parameter to data and then relabeling it a prediction. DCP is invoked as an established external prior for anomaly detection, with no self-citation chains or uniqueness theorems from the authors' prior work serving as load-bearing justification. No equations or claims reduce a target quantity to a fitted input or self-defined signal; the framework adds independent mechanisms for reliability modeling and pruning. This is the common case of a self-contained proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Based solely on the abstract, the central claim rests on domain assumptions about Gaussian constraint levels and artifact detection; no specific free parameters or invented entities are detailed in the provided text.

axioms (2)
  • domain assumption Dropout probability in a depth-guided strategy serves as an explicit proxy for the degree to which each Gaussian primitive is constrained during optimization.
    This underpins the CDGD component as an inductive bias for suppressing weakly constrained Gaussians.
  • domain assumption Floater artifacts can be connected to atmospheric scattering and identified using the Dark Channel Prior as a structural consistency cue across views.
    This underpins the observation-domain evidence and subsequent geometric pruning.

pith-pipeline@v0.9.0 · 5585 in / 1589 out tokens · 78062 ms · 2026-05-10T18:59:09.962601+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. PairDropGS: Paired Dropout-Induced Consistency Regularization for Sparse-View Gaussian Splatting

    cs.CV 2026-05 unverdicted novelty 7.0

    PairDropGS applies paired dropout-induced low-frequency consistency regularization and progressive scheduling to improve stability and quality in sparse-view 3D Gaussian Splatting over prior dropout methods.

  2. PairDropGS: Paired Dropout-Induced Consistency Regularization for Sparse-View Gaussian Splatting

    cs.CV 2026-05 unverdicted novelty 6.0

    PairDropGS uses paired dropout with low-frequency consistency regularization and progressive scheduling to stabilize and improve sparse-view 3D Gaussian Splatting.

Reference graph

Works this paper leans on

61 extracted references · 9 canonical work pages · cited by 1 Pith paper · 1 internal anchor
