Recognition: 2 theorem links
DOC-GS: Dual-Domain Observation and Calibration for Reliable Sparse-View Gaussian Splatting
Pith reviewed 2026-05-10 18:59 UTC · model grok-4.3
The pith
The DOC-GS framework models Gaussian reliability through dual-domain signals to reduce artifacts in sparse-view 3D Gaussian Splatting.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that unreliable Gaussians, those insufficiently constrained during optimization, accumulate as haze-like degradations, and that a unified framework can mitigate them: Continuous Depth-Guided Dropout imposes a smooth depth-aware inductive bias during optimization, while the Dark Channel Prior accumulates cross-view evidence for reliability-driven geometric pruning.
What carries the argument
The Dual-domain Observation and Calibration (DOC-GS) framework, specifically the Continuous Depth-Guided Dropout (CDGD) strategy as a proxy for primitive reliability and the Dark Channel Prior (DCP) for identifying anomalous regions.
If this is right
- Suppressing weakly constrained Gaussians improves optimization stability in sparse-view training.
- Pruning based on aggregated cross-view evidence removes floaters and reduces artifacts.
- Rendered images exhibit fewer structural distortions and translucent haze.
- Overall reconstruction quality increases without relying on purely heuristic regularization.
Where Pith is reading between the lines
- Applying similar dual-domain ideas could help in other ill-posed reconstruction tasks like neural radiance fields.
- Tracking reliability over training iterations might allow dynamic adjustment beyond fixed pruning.
- This approach suggests that artifact formation has identifiable signatures that can be used for self-correction in rendering pipelines.
Load-bearing premise
Unreliable Gaussians are the main cause of artifacts, and the dropout probability plus dark channel prior accurately identify them across different scenes.
What would settle it
A benchmark evaluation: if applying the pruning and dropout fails to yield measurable improvements in PSNR, SSIM, or visual quality over baseline 3DGS, the core claim is refuted; consistent gains across scenes would support it.
Original abstract
Sparse-view reconstruction with 3D Gaussian Splatting (3DGS) is fundamentally ill-posed due to insufficient geometric supervision, often leading to severe overfitting and the emergence of structural distortions and translucent haze-like artifacts. While existing approaches attempt to alleviate this issue via dropout-based regularization, they are largely heuristic and lack a unified understanding of artifact formation. In this paper, we revisit sparse-view 3DGS reconstruction from a new perspective and identify the core challenge as the unobservability of Gaussian primitive reliability. Unreliable Gaussians are insufficiently constrained during optimization and accumulate as haze-like degradations in rendered images. Motivated by this observation, we propose a unified Dual-domain Observation and Calibration (DOC-GS) framework that models and corrects Gaussian reliability through the synergy of optimization-domain inductive bias and observation-domain evidence. Specifically, in the optimization domain, we characterize Gaussian reliability by the degree to which each primitive is constrained during training, and instantiate this signal via a Continuous Depth-Guided Dropout (CDGD) strategy, where the dropout probability serves as an explicit proxy for primitive reliability. This imposes a smooth depth-aware inductive bias to suppress weakly constrained Gaussians and improve optimization stability. In the observation domain, we establish a connection between floater artifacts and atmospheric scattering, and leverage the Dark Channel Prior (DCP) as a structural consistency cue to identify and accumulate anomalous regions. Based on cross-view aggregated evidence, we further design a reliability-driven geometric pruning strategy to remove low-confidence Gaussians.
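The Dark Channel Prior the abstract leans on is a standard cue from single-image dehazing (He et al., 2009): take the per-pixel minimum over the color channels, then a local minimum filter over a small window. A minimal sketch of that computation, with the patch size and any application to rendered views purely illustrative rather than the paper's settings:

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 3) -> np.ndarray:
    """Dark channel of an HxWx3 image: min over RGB, then a local min filter."""
    # Per-pixel minimum over the color channels.
    per_pixel_min = img.min(axis=2)
    # Local minimum filter over a patch x patch window (edge-padded).
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    h, w = per_pixel_min.shape
    out = np.empty((h, w), dtype=img.dtype)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

In haze-free regions the dark channel tends toward zero, so persistently bright dark-channel values are the anomaly signal the paper aggregates across views.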
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes DOC-GS, a Dual-Domain Observation and Calibration framework for sparse-view 3D Gaussian Splatting. It identifies unreliable Gaussians (insufficiently constrained during optimization) as the source of structural distortions and haze-like artifacts. The method combines an optimization-domain inductive bias via Continuous Depth-Guided Dropout (CDGD), where dropout probability proxies primitive reliability and imposes depth-aware regularization, with an observation-domain component that uses the Dark Channel Prior (DCP) to detect anomalous regions across views and applies reliability-driven geometric pruning to remove low-confidence Gaussians.
Significance. If the dual-domain reliability modeling holds, the work could meaningfully improve reconstruction stability in under-constrained sparse-view 3DGS settings, which remain a practical bottleneck. The explicit proxy formulation and cross-view aggregation offer a more unified treatment than prior heuristic dropout methods, with potential for broader adoption in real-world capture scenarios lacking dense views.
major comments (2)
- [Method (CDGD description)] The central claim that CDGD dropout probability serves as an explicit proxy for per-primitive constraint level (optimization domain) lacks direct empirical grounding. No correlation analysis is provided between dropout rates and optimization sensitivity measures such as per-Gaussian gradient norms, Hessian traces, or ablation on view count; without this, the mapping remains an untested modeling assumption that underpins the entire regularization strategy.
- [Observation-domain calibration and pruning] The observation-domain half rests on the unverified assumption that DCP-detected dark-channel anomalies reliably coincide with floater/haze artifacts rather than other rendering issues. The manuscript does not report precision-recall of the aggregated DCP masks against manually labeled floaters or controlled experiments isolating DCP's contribution from other priors, which is load-bearing for the pruning step's validity.
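The correlation analysis the first comment asks for is inexpensive to specify: a Pearson coefficient between per-Gaussian dropout rates and gradient-norm magnitudes. A self-contained sketch of the statistic (the pairing of those two per-Gaussian quantities is precisely the untested assumption at issue):

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation: covariance normalized by the two standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A strong positive r between dropout probability and gradient norm across view counts would ground the proxy claim; a weak one would undercut it.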
minor comments (2)
- [CDGD strategy] Notation for the continuous depth-guided dropout probability schedule is introduced without an explicit equation or pseudocode; adding a compact formulation (e.g., p(d) = f(depth, iteration)) would improve reproducibility.
- [Introduction] The abstract and introduction repeatedly use 'unreliable Gaussians' without an initial formal definition; a short paragraph early in the paper defining reliability in terms of constraint degree would aid readers.
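The first minor comment asks for a compact formulation of the schedule p(d) = f(depth, iteration). The paper gives none, so the following is one plausible hypothetical instantiation, not the authors' method: dropout probability rises smoothly with depth (farther primitives being typically less constrained) and anneals to zero over training. The names p_max, tau, and total_iters are invented for illustration.

```python
import math

def cdgd_dropout_prob(depth: float, iteration: int,
                      p_max: float = 0.5, tau: float = 10.0,
                      total_iters: int = 30000) -> float:
    """Hypothetical continuous depth-guided dropout schedule (illustrative only)."""
    # Smooth, monotone in depth, saturating in [0, 1).
    depth_term = 1.0 - math.exp(-depth / tau)
    # Linear annealing so dropout vanishes by the end of training.
    anneal = max(1.0 - iteration / total_iters, 0.0)
    return p_max * depth_term * anneal
```

Any concrete f would do; the reproducibility point is that the paper should commit to one in an equation or pseudocode like this.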
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our work. We address the two major comments point by point below, acknowledging where additional empirical support is warranted and outlining planned revisions.
Point-by-point responses
-
Referee: [Method (CDGD description)] The central claim that CDGD dropout probability serves as an explicit proxy for per-primitive constraint level (optimization domain) lacks direct empirical grounding. No correlation analysis is provided between dropout rates and optimization sensitivity measures such as per-Gaussian gradient norms, Hessian traces, or ablation on view count; without this, the mapping remains an untested modeling assumption that underpins the entire regularization strategy.
Authors: We agree that the manuscript would benefit from explicit empirical validation of the proxy relationship. The CDGD formulation is motivated by the observation that dropout probability directly encodes the degree of optimization constraint (higher probability for primitives receiving weaker supervision), but we did not include correlation studies against gradient norms, Hessian traces, or view-count ablations in the original submission. In the revision we will add a dedicated analysis subsection reporting Pearson correlations between per-Gaussian dropout rates and gradient-norm magnitudes across varying view counts, together with an ablation that varies the number of input views while tracking both dropout statistics and reconstruction metrics. revision: yes
-
Referee: [Observation-domain calibration and pruning] The observation-domain half rests on the unverified assumption that DCP-detected dark-channel anomalies reliably coincide with floater/haze artifacts rather than other rendering issues. The manuscript does not report precision-recall of the aggregated DCP masks against manually labeled floaters or controlled experiments isolating DCP's contribution from other priors, which is load-bearing for the pruning step's validity.
Authors: We acknowledge that quantitative validation of the DCP-to-artifact correspondence was not provided. While the dark-channel prior is a well-established cue for scattering-like degradations and we aggregate it across views to identify unreliable regions, the original manuscript lacks precision-recall figures against labeled floaters and isolated ablations of the DCP term. We will revise the observation-domain section to include (i) precision-recall evaluation of the aggregated DCP masks on a manually annotated subset of scenes and (ii) a controlled ablation that disables the DCP-driven pruning while retaining the optimization-domain component, thereby isolating its contribution. revision: yes
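The promised precision-recall evaluation of the aggregated DCP masks reduces to standard binary-mask scoring against manual floater labels. A minimal sketch of that metric (the mask encoding as flat 0/1 sequences is an assumption for illustration):

```python
def precision_recall(pred: list[int], gt: list[int]) -> tuple[float, float]:
    """Precision and recall of a predicted binary mask against a labeled one."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)          # correctly flagged
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)      # false alarms
    fn = sum(1 for p, g in zip(pred, gt) if not p and g)      # missed floaters
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

High precision matters most here: the pruning step deletes geometry, so false positives in the DCP mask translate directly into lost scene content.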
Circularity Check
No circularity: new inductive biases and external priors introduced without reduction to inputs
Full rationale
The derivation introduces CDGD as a new strategy that defines dropout probability as a proxy for constraint level by construction of the method itself, not by fitting a parameter to data and then relabeling it a prediction. DCP is invoked as an established external prior for anomaly detection, with no self-citation chains or uniqueness theorems from the authors' prior work serving as load-bearing justification. No equations or claims reduce a target quantity to a fitted input or self-defined signal; the framework adds independent mechanisms for reliability modeling and pruning. This is the common case of a self-contained proposal.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Dropout probability in a depth-guided strategy serves as an explicit proxy for the degree to which each Gaussian primitive is constrained during optimization.
- domain assumption Floater artifacts can be connected to atmospheric scattering and identified using the Dark Channel Prior as a structural consistency cue across views.
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
The relation between the paper passage and the cited Recognition theorem is unclear.
the dropout probability serves as an explicit proxy for primitive reliability... Continuous Depth-Guided Dropout (CDGD) strategy... Dark Channel Prior (DCP) as a structural consistency cue
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · unclear
The relation between the paper passage and the cited Recognition theorem is unclear.
reformulate sparse-view 3DGS reconstruction as a dual-domain reliability inference problem
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 1 Pith paper
-
PairDropGS: Paired Dropout-Induced Consistency Regularization for Sparse-View Gaussian Splatting
PairDropGS applies paired dropout-induced low-frequency consistency regularization and progressive scheduling to improve stability and quality in sparse-view 3D Gaussian Splatting over prior dropout methods.