pith. machine review for the scientific record.

arxiv: 2605.12072 · v2 · submitted 2026-05-12 · 💻 cs.CV

Recognition: no theorem link

PairDropGS: Paired Dropout-Induced Consistency Regularization for Sparse-View Gaussian Splatting

Pith reviewed 2026-05-14 20:34 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D Gaussian Splatting · sparse-view reconstruction · consistency regularization · dropout · neural rendering · computer vision · image synthesis

The pith

PairDropGS enforces low-frequency consistency between paired dropout versions of a Gaussian field to stabilize sparse-view 3DGS training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper aims to show that existing dropout methods for sparse-view 3D Gaussian Splatting suffer from inconsistencies across different dropped subsets, which destabilize reconstruction and hurt representation quality. PairDropGS addresses this by sampling two dropped subsets from the same underlying Gaussian field and applying a regularization term that forces their low-frequency rendered outputs to agree. This preserves stable coarse geometry and scene layout while leaving high-frequency details relatively free. A progressive schedule ramps up the strength of the consistency constraint over training epochs for better convergence. If correct, the result is higher-quality novel-view synthesis from few input images using a method that can be added to existing dropout pipelines.
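
To make that mechanism concrete, here is a minimal PyTorch-style sketch of one training step under this reading. It is an illustration, not the paper's implementation: `render` stands in for a differentiable 3DGS rasterizer, `gaussians.num_points` is a hypothetical attribute, and the L1 losses, 0.1 dropout rate, blur parameters, and linear ramp are all assumed placeholder choices.

```python
import torch
import torch.nn.functional as F

def low_pass(img: torch.Tensor, kernel_size: int = 9, sigma: float = 2.0) -> torch.Tensor:
    """Depthwise 2D Gaussian blur; kernel_size/sigma are illustrative, not from the paper."""
    c = img.shape[1]
    coords = torch.arange(kernel_size, dtype=img.dtype, device=img.device) - kernel_size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = torch.outer(g, g).view(1, 1, kernel_size, kernel_size).repeat(c, 1, 1, 1)
    return F.conv2d(img, kernel, padding=kernel_size // 2, groups=c)

def pairdrop_step(gaussians, view, gt, render, step, total_steps,
                  drop_rate=0.1, lambda_max=1.0):
    """One sketched training step: paired dropout, low-frequency consistency, ramped weight."""
    n = gaussians.num_points  # hypothetical attribute of the shared Gaussian field
    keep_a = torch.rand(n, device=gt.device) > drop_rate  # first dropout realization
    keep_b = torch.rand(n, device=gt.device) > drop_rate  # second, independent realization

    img_a = render(gaussians, keep_a, view)  # (1, C, H, W), differentiable rasterizer
    img_b = render(gaussians, keep_b, view)

    # Both branches are supervised by the same ground-truth view (two-branch reconstruction).
    recon = F.l1_loss(img_a, gt) + F.l1_loss(img_b, gt)

    # Consistency is enforced only between low-pass filtered renders, so coarse layout
    # must agree across dropout realizations while high-frequency detail stays freer.
    consist = F.l1_loss(low_pass(img_a), low_pass(img_b))

    # Progressive schedule: linearly ramp the consistency weight over early training.
    lam = lambda_max * min(1.0, step / max(1, total_steps // 2))
    return recon + lam * consist
```

The load-bearing choice in this sketch is that gradients from the consistency term flow only through blurred images, which is one plausible way to read "low-frequency rendered structures"; the paper's actual filter and schedule may differ.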

Core claim

PairDropGS revisits dropout-based sparse-view 3DGS from a consistency regularization perspective and proposes constructing pairs of dropped Gaussian subsets from a shared field, then constraining their low-frequency rendered structures via a dedicated loss together with progressive scheduling of the regularization strength.

What carries the argument

Paired dropout-induced low-frequency consistency loss applied to rendered images from two differently suppressed versions of the same Gaussian set.

If this is right

  • The shared Gaussian field maintains consistent coarse geometry across random dropout realizations.
  • High-frequency details remain less constrained so that fine scene content can still be captured.
  • Training convergence becomes more stable because inconsistencies among dropped subsets are reduced.
  • The approach integrates directly as an addition to prior dropout-based 3DGS training routines.
  • Reconstruction quality on sparse-view data exceeds results from earlier dropout-only methods.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The frequency-separated consistency idea may transfer to other neural rendering models that use random masking or dropout.
  • Similar paired consistency could reduce the need for increasingly elaborate dropout schedules in future sparse reconstruction work.
  • Testing the same pairing strategy on real-world captures with even fewer views or on dynamic scenes would check robustness beyond the paper's benchmarks.

Load-bearing premise

That low-frequency agreement between different random dropout realizations of the same Gaussian field will improve overall representation learning without suppressing necessary scene details or adding new instabilities.

What would settle it

Experiments on standard sparse-view benchmarks such as LLFF or DTU that show no gain or a drop in PSNR and SSIM when the paired consistency term is added would falsify the claim.
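
Scoring that test is mechanical once renders exist. A minimal sketch using scikit-image metrics, where the variable names and the evaluation split are hypothetical:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_views(renders, ground_truths):
    """Mean PSNR/SSIM over held-out test views; images are HxWx3 floats in [0, 1]."""
    psnrs, ssims = [], []
    for pred, gt in zip(renders, ground_truths):
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
        ssims.append(structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Hypothetical usage: the claim fails if adding the paired consistency term does
# not improve these numbers over the dropout-only baseline on the same split.
# psnr_base, ssim_base = score_views(renders_dropout_only, test_views)
# psnr_pair, ssim_pair = score_views(renders_pairdropgs, test_views)
```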

Figures

Figures reproduced from arXiv: 2605.12072 by Debin Zhao, Hantang Li, Qiang Zhu, Xiandong Meng, Xiaopeng Fan, Xingtao Wang.

Figure 1: Comparison of existing dropout-based methods and PairDropGS. (a) Existing dropout-based methods mainly design sophisticated dropout strategies for improving sparse-view reconstruction. (b) In contrast, PairDropGS constructs a pair of dropped Gaussian subsets from a shared Gaussian field and explicitly regularizes consistency of their rendered structures for stable and robust sparse-view reconstruction. [10…

Figure 2: Stability and performance comparisons of existing methods and PairDropGS. (a) PSNR variation across different training rounds on the LLFF 3-view horns 00003 scene. Existing dropout-based methods show large performance fluctuations, while PairDropGS achieves more stable and higher reconstruction quality. (b) Overall PSNR comparison. PairDropGS significantly and consistently outperforms existing methods unde…

Figure 3: Overview of PairDropGS. Starting from a shared Gaussian field, PairDropGS constructs two independently dropped Gaussian subsets under the same training view during training, forming paired branches. Both branches are supervised by the same ground-truth observation through a two-branch reconstruction loss. To stabilize the structural responses rendered from different dropped subsets, the rendered images are p…

Figure 4: Qualitative comparison of two sparse-view methods with our method on the LLFF dataset under 3-view settings.

Figure 5: Qualitative comparison of two sparse-view methods with our method on the MipNeRF-360 dataset under 12-view settings.

Figure 6: Qualitative comparison of two sparse-view methods with our method on the Blender dataset under 8-view settings.

Figure 7: Per-scene PSNR fluctuation across three datasets.

Figure 8: Plug-and-play compatibility on the LLFF dataset under 3-view settings. PairDropGS can be easily integrated into…

Figure 9: Training PSNR curves of different ablation variants.
Original abstract

Dropout-based sparse-view 3D Gaussian Splatting (3DGS) methods alleviate overfitting by randomly suppressing Gaussian primitives during training. Existing methods mainly focus on designing increasingly sophisticated dropout strategies, while overlooking the resulting inconsistencies among different dropped Gaussian subsets. This oversight often leads to unstable reconstruction and suboptimal Gaussian representation learning. In this paper, we revisit dropout-based sparse-view 3DGS from a consistency regularization perspective and propose PairDropGS, a Paired Dropout-induced Consistency Regularization framework for sparse-view Gaussian splatting. Specifically, PairDropGS first constructs a pair of dropped Gaussian subsets from a shared Gaussian field and designs a low-frequency consistency regularization to constrain their low-frequency rendered structures. This design encourages the shared Gaussian field to preserve stable scene layout and coarse geometry under different random dropouts, while avoiding excessive constraints on ambiguous high-frequency details. Moreover, we introduce a progressive consistency scheduling strategy that gradually strengthens the consistency regularization during training for stability and robustness of reconstruction. Extensive experiments on widely used sparse-view benchmarks demonstrate that PairDropGS achieves superior training stability and significantly outperforms existing dropout-based 3DGS methods in reconstruction quality, while retaining the simplicity and plug-and-play nature needed to improve dropout-based optimization.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The paper proposes PairDropGS, a Paired Dropout-induced Consistency Regularization framework for sparse-view 3D Gaussian Splatting. It constructs pairs of dropped Gaussian subsets from a shared field, applies a low-frequency consistency regularization to enforce stable coarse geometry and scene layout across random dropouts, and introduces progressive consistency scheduling to gradually increase regularization strength. The method is presented as a simple, plug-and-play addition to existing dropout-based 3DGS approaches, with experiments on standard sparse-view benchmarks claiming superior training stability and reconstruction quality.

Significance. If the empirical claims hold, the work provides a lightweight consistency-based regularization that addresses overlooked inconsistencies in dropout-based 3DGS without requiring elaborate dropout scheduling. The emphasis on preserving high-frequency details while stabilizing low-frequency structure, combined with the progressive schedule, could offer a practical improvement for sparse-view reconstruction pipelines that already use dropout.

Major comments (1)
  1. [Abstract] The central design claim that the low-frequency consistency regularization 'avoids excessive constraints on ambiguous high-frequency details' is load-bearing for the method's advantage over prior dropout approaches, yet the abstract (and, by extension, the described framework) provides no explicit mechanism for frequency separation, such as a cutoff, filter kernel, or Fourier-domain weighting. In sparse-view settings where high-frequency content is already weak, any leakage from the consistency term could produce exactly the over-smoothing the paper claims to prevent; an ablation isolating frequency bands, or a quantitative high-frequency energy comparison against baselines, is required to substantiate this separation.

Simulated Author's Rebuttal

1 response · 0 unresolved

Thank you for the opportunity to respond to the referee's report. We appreciate the constructive feedback and address the major comment below. We are prepared to revise the manuscript to strengthen the substantiation of our claims.

Point-by-point responses
  1. Referee: [Abstract] The central design claim that the low-frequency consistency regularization 'avoids excessive constraints on ambiguous high-frequency details' is load-bearing for the method's advantage over prior dropout approaches, yet the abstract (and, by extension, the described framework) provides no explicit mechanism for frequency separation, such as a cutoff, filter kernel, or Fourier-domain weighting. In sparse-view settings where high-frequency content is already weak, any leakage from the consistency term could produce exactly the over-smoothing the paper claims to prevent; an ablation isolating frequency bands, or a quantitative high-frequency energy comparison against baselines, is required to substantiate this separation.

    Authors: We thank the referee for this important observation. The low-frequency consistency regularization is implemented by applying the consistency loss to rendered structures after low-pass filtering (via a fixed Gaussian kernel on the output images, as described in Section 3.2), which targets coarse geometry and scene layout while leaving high-frequency discrepancies unpenalized. This design choice provides the implicit separation. We acknowledge, however, that the abstract and main text do not sufficiently detail the filtering mechanism or provide direct empirical validation against over-smoothing. In the revised version we will (1) expand the abstract and method section to explicitly state the low-pass filter parameters and frequency separation rationale, (2) add an ablation that isolates frequency bands (e.g., high-pass filtered PSNR and energy metrics), and (3) include quantitative high-frequency energy comparisons to the dropout baselines. These additions will directly substantiate the claim without altering the reported performance gains. revision: yes
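
As a sketch of what the promised high-frequency energy comparison could compute, here is an FFT-based spectral energy ratio. The radial cutoff of 0.25 and grayscale input are arbitrary illustrative choices, not values or a protocol from the paper or the rebuttal:

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff (2D grayscale array).

    Over-smoothing from a leaky consistency term would show up as this ratio
    dropping relative to dropout-only baselines at matched PSNR.
    """
    spec = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spec) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized radial frequency, 0 at the center of the shifted spectrum.
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(power[r > cutoff].sum() / power.sum())
```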

Circularity Check

0 steps flagged

No circularity in derivation chain

Full rationale

The paper introduces PairDropGS as a new regularization framework consisting of paired dropout subsets and a low-frequency consistency loss on rendered structures, plus a progressive scheduling strategy. No equations, derivations, or self-citations are shown that reduce the claimed stability or quality gains to a fitted parameter, self-definition, or prior result by the same authors. The central contribution is an empirical training constraint whose effectiveness is validated externally on sparse-view benchmarks rather than by construction from its inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The approach rests on standard assumptions from 3D Gaussian Splatting and consistency regularization literature; no new free parameters, axioms, or invented entities are explicitly introduced in the abstract.

pith-pipeline@v0.9.0 · 5530 in / 1035 out tokens · 38345 ms · 2026-05-14T20:34:54.492962+00:00 · methodology


Reference graph

Works this paper leans on

41 extracted references · 8 canonical work pages · 2 internal anchors

  1. [1]

    3d gaussian splatting for real-time radiance field rendering

    B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3d gaussian splatting for real-time radiance field rendering,” ACM Trans. Graph., vol. 42, no. 4, pp. 1–14, 2023

  2. [2]

    A Survey on 3D Gaussian Splatting

    G. Chen and W. Wang, “A survey on 3d gaussian splatting,” arXiv:2401.03890, 2024

  3. [3]

    3d gaussian splatting as new era: A survey

    B. Fei, J. Xu, R. Zhang, Q. Zhou, W. Yang, and Y. He, “3d gaussian splatting as new era: A survey,” IEEE Trans. Vis. Comput. Graph., 2024

  4. [5]

    Gaussianshader: 3d gaussian splatting with shading functions for reflective surfaces

    Y. Jiang, J. Tu, Y. Liu, X. Gao, X. Long, W. Wang, and Y. Ma, “Gaussianshader: 3d gaussian splatting with shading functions for reflective surfaces,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2024, pp. 5322–5332

  5. [6]

    Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields

    J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan, “Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 5855–5864

  6. [7]

    Mip-nerf 360: Unbounded anti-aliased neural radiance fields

    J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman, “Mip-nerf 360: Unbounded anti-aliased neural radiance fields,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 5470–5479

  7. [8]

    Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs

    M. Niemeyer, J. T. Barron, B. Mildenhall, M. S. M. Sajjadi, A. Geiger, and N. Radwan, “Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 5480–5490

  8. [9]

    Instant neural graphics primitives with a multiresolution hash encoding

    T. Müller, A. Evans, C. Schied, and A. Keller, “Instant neural graphics primitives with a multiresolution hash encoding,” ACM Trans. Graph., vol. 41, no. 4, pp. 102:1–102:15, 2022

  9. [10]

    Freenerf: Improving few-shot neural rendering with free frequency regularization

    J. Yang, M. Pavone, and Y. Wang, “Freenerf: Improving few-shot neural rendering with free frequency regularization,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2023, pp. 8254–8263

  10. [11]

    Sparsenerf: Distilling depth ranking for few-shot novel view synthesis

    G. Wang, Z. Chen, C. C. Loy, and Z. Liu, “Sparsenerf: Distilling depth ranking for few-shot novel view synthesis,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2023, pp. 9065–9076

  11. [12]

    Dngaussian: Optimizing sparse-view 3d gaussian radiance fields with global-local depth normalization

    J. Li, J. Zhang, X. Bai, J. Zheng, X. Ning, J. Zhou, and L. Gu, “Dngaussian: Optimizing sparse-view 3d gaussian radiance fields with global-local depth normalization,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2024, pp. 20775–20785

  12. [13]

    Fsgs: Real-time few-shot view synthesis using gaussian splatting

    Z. Zhu, Z. Fan, Y. Jiang, and Z. Wang, “Fsgs: Real-time few-shot view synthesis using gaussian splatting,” in Eur. Conf. Comput. Vis., 2024, pp. 145–163

  13. [14]

    Cor-gs: Sparse-view 3d gaussian splatting via co-regularization

    J. Zhang, J. Li, X. Yu, L. Huang, L. Gu, J. Zheng, and X. Bai, “Cor-gs: Sparse-view 3d gaussian splatting via co-regularization,” in Eur. Conf. Comput. Vis., 2024, pp. 335–352

  14. [15]

    Nexusgs: Sparse view synthesis with epipolar depth priors in 3d gaussian splatting

    Y. Zheng, Z. Jiang, S. He, Y. Sun, J. Dong, H. Zhang, and Y. Du, “Nexusgs: Sparse view synthesis with epipolar depth priors in 3d gaussian splatting,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2025, pp. 26800–26809

  15. [16]

    Dwtgs: Rethinking frequency regularization for sparse-view 3d gaussian splatting

    H. Nguyen, R. Li, A. Le, and T. Nguyen, “Dwtgs: Rethinking frequency regularization for sparse-view 3d gaussian splatting,” arXiv:2507.15690, 2025

  16. [17]

    Self-ensembling gaussian splatting for few-shot novel view synthesis

    C. Zhao, X. Wang, T. Zhang, et al., “Self-ensembling gaussian splatting for few-shot novel view synthesis,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2025, pp. 4940–4950

  17. [18]

    Dropoutgs: Dropping out gaussians for better sparse-view rendering

    Y. Xu, L. Wang, M. Chen, S. Ao, L. Li, and Y. Guo, “Dropoutgs: Dropping out gaussians for better sparse-view rendering,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2025, pp. 701–710

  18. [19]

    Dropgaussian: Structural regularization for sparse-view gaussian splatting

    H. Park, G. Ryu, and W. Kim, “Dropgaussian: Structural regularization for sparse-view gaussian splatting,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2025, pp. 21600–21609

  19. [20]

    D2gs: Depth-and-density guided gaussian splatting for stable and accurate sparse-view reconstruction

    M. Song, X. Lin, D. Zhang, H. Li, X. Li, B. Du, and L. Qi, “D2gs: Depth-and-density guided gaussian splatting for stable and accurate sparse-view reconstruction,” arXiv:2510.08566, 2025

  20. [21]

    DOC-GS: Dual-Domain Observation and Calibration for Reliable Sparse-View Gaussian Splatting

    H. Li, Q. Zhu, X. Meng, D. Zhao, and X. Fan, “Doc-gs: Dual-domain observation and calibration for reliable sparse-view gaussian splatting,” arXiv:2604.06739, 2026

  21. [22]

    Dropping anchor and spherical harmonics for sparse-view gaussian splatting

    S. Fang, I.-C. Shen, X. Zhang, Z. Wang, Y. Wang, W. Ding, G. Yu, and T. Igarashi, “Dropping anchor and spherical harmonics for sparse-view gaussian splatting,” arXiv:2602.20933, 2026

  22. [23]

    Ugod: Uncertainty-guided differentiable opacity and soft dropout for enhanced sparse-view 3dgs

    Z. Guo, P. Wang, Z. Chen, et al., “Ugod: Uncertainty-guided differentiable opacity and soft dropout for enhanced sparse-view 3dgs,” arXiv:2508.04968, 2025

  23. [24]

    Quantifying and alleviating co-adaptation in sparse-view 3d gaussian splatting

    K. Chen, Y. Zhong, Z. Li, J. Lin, Y. Chen, M. Qin, and H. Wang, “Quantifying and alleviating co-adaptation in sparse-view 3d gaussian splatting,” arXiv:2508.12720, 2025

  24. [25]

    Nerf: Representing scenes as neural radiance fields for view synthesis

    B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “Nerf: Representing scenes as neural radiance fields for view synthesis,” in Eur. Conf. Comput. Vis., 2020, pp. 405–421

  25. [26]

    Cbarf: Cascaded bundle-adjusting neural radiance fields from imperfect camera poses

    H. Fu, X. Yu, L. Li, Y. Zhang, and J. Wang, “Cbarf: Cascaded bundle-adjusting neural radiance fields from imperfect camera poses,” IEEE Trans. Multimedia, vol. 26, pp. 9304–9315, 2024

  26. [27]

    Atm-nerf: Accelerating training for nerf rendering on mobile devices via geometric regularization

    Y. Chen, L. Zhang, S. Zhao, X. Liu, and H. Wang, “Atm-nerf: Accelerating training for nerf rendering on mobile devices via geometric regularization,” IEEE Trans. Multimedia, vol. 27, pp. 3279–3293, 2025

  27. [28]

    Gan prior-enhanced novel view synthesis from monocular degraded images

    K. Guo, Z. Wu, X. Wen, Y. Liu, and H. Chen, “Gan prior-enhanced novel view synthesis from monocular degraded images,” IEEE Trans. Multimedia, 2025

  28. [29]

    pixelNeRF: Neural radiance fields from one or few images

    A. Yu, V. Ye, M. Tancik, and A. Kanazawa, “pixelNeRF: Neural radiance fields from one or few images,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 4578–4587

  29. [30]

    IBRNet: Learning multi-view image-based rendering

    Q. Wang, Z. Wang, K. Genova, P. Srinivasan, H. Zhou, J. T. Barron, R. Martin-Brualla, N. Snavely, and T. Funkhouser, “IBRNet: Learning multi-view image-based rendering,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 4690–4699

  30. [31]

    MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo

    A. Chen, Z. Xu, F. Zhao, X. Zhang, F. Xiang, J. Yu, and H. Su, “MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 14124–14133

  31. [32]

    Mip-splatting: Alias-free 3d gaussian splatting

    Z. Yu, A. Chen, B. Huang, T. Sattler, and A. Geiger, “Mip-splatting: Alias-free 3d gaussian splatting,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2024, pp. 19447–19456

  32. [33]

    Scaffold-gs: Structured 3d gaussians for view-adaptive rendering

    T. Lu, M. Yu, L. Xu, Y. Xiangli, L. Wang, D. Lin, and B. Dai, “Scaffold-gs: Structured 3d gaussians for view-adaptive rendering,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2024, pp. 20654–20664

  33. [34]

    Gaussianpro: 3d gaussian splatting with progressive propagation

    K. Cheng, X. Long, K. Yang, Y. Yao, W. Yin, Y. Ma, W. Wang, and X. Chen, “Gaussianpro: 3d gaussian splatting with progressive propagation,” in Proc. 41st Int. Conf. Mach. Learn., 2024

  34. [35]

    Structgs: Adaptive spherical harmonics and rendering enhancements for superior 3d gaussian splatting

    Z. Huang, M. Xu, and S. Perry, “Structgs: Adaptive spherical harmonics and rendering enhancements for superior 3d gaussian splatting,” IEEE Trans. Multimedia, 2025

  35. [36]

    Msa-splatting: Multi-scale adaptive gaussian splatting for high-fidelity view synthesis

    Y. Zhao, G. Chen, B. Wu, Y. Li, and H. Zhang, “Msa-splatting: Multi-scale adaptive gaussian splatting for high-fidelity view synthesis,” IEEE Trans. Multimedia, 2026

  36. [37]

    MVSplat: Efficient 3D gaussian splatting from sparse multi-view images

    Y. Chen, H. Xu, C. Zheng, B. Zhuang, M. Pollefeys, A. Geiger, T.-J. Cham, and J. Cai, “MVSplat: Efficient 3D gaussian splatting from sparse multi-view images,” in Proc. Eur. Conf. Comput. Vis., 2024, pp. 370–386

  37. [38]

    Depth-regularized optimization for 3D gaussian splatting in few-shot images

    J. Chung, J. Oh, and K. M. Lee, “Depth-regularized optimization for 3D gaussian splatting in few-shot images,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2024, pp. 811–820

  38. [39]

    SparseGS: Real-time 360° sparse view synthesis using gaussian splatting

    H. Xiong, S. Muttukuru, R. Upadhyay, P. Chari, and A. Kadambi, “SparseGS: Real-time 360° sparse view synthesis using gaussian splatting,” arXiv:2312.00206, 2023

  39. [40]

    Generative sparse-view gaussian splatting

    H. Kong, X. Liu, X. Chen, M. Di, Z. Wang, Z. Wu, B. Zhou, and D. Chen, “Generative sparse-view gaussian splatting,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2025

  40. [41]

    Putting nerf on a diet: Semantically consistent few-shot view synthesis

    A. Jain, M. Tancik, and P. Abbeel, “Putting nerf on a diet: Semantically consistent few-shot view synthesis,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 5885–5894

  41. [42]

    Local light field fusion: Practical view synthesis with prescriptive sampling guidelines

    B. Mildenhall, P. P. Srinivasan, R. Ortiz-Cayon, N. K. Kalantari, R. Ramamoorthi, R. Ng, and A. Kar, “Local light field fusion: Practical view synthesis with prescriptive sampling guidelines,” ACM Trans. Graph., vol. 38, no. 4, pp. 1–14, 2019