pith. machine review for the scientific record.

arxiv: 2605.08739 · v1 · submitted 2026-05-09 · 💻 cs.CV

Recognition: no theorem link

ReorgGS: Equivalent Distribution Reorganization for 3D Gaussian Splatting

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:56 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D Gaussian Splatting · parameterization degeneration · resampling · alpha compositing · gradient accessibility · overlap reduction · equivalent distribution · kNN covariance

The pith

ReorgGS reorganizes the Gaussians of a converged 3D Gaussian Splatting model into a new but distributionally equivalent set by resampling centers and rebuilding covariances, which then optimizes better under the original renderer and loss.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

A converged 3D Gaussian Splatting model can approximate a scene yet remain stuck for further training because high-opacity floaters block gradients through alpha compositing and overlapping clusters create tightly coupled parameters. ReorgGS treats the current set of Gaussians as an empirical probability field over the scene, resamples new center locations from that field, estimates fresh local anisotropic covariances using nearest-neighbor information, and restarts with low opacities. The reorganized model keeps the same scene support but changes the overlap graph itself. This improves gradient accessibility and reduces opacity-weighted redundancy, allowing the same renderer and loss to reach higher quality at a fixed Gaussian count after additional optimization. The paper shows that distributional equivalence does not imply optimization equivalence.
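The resampling step described above can be sketched in a few lines. This is a hedched editorial sketch, not the paper's implementation: the choice to weight the empirical density by opacity alone, and the function name `resample_centers`, are assumptions.

```python
import numpy as np

def resample_centers(means, covs, opacities, n_new, seed=None):
    """Draw new center locations from the empirical density implied by the
    current Gaussian set, treated as a mixture whose components are the
    existing splats weighted by opacity (one plausible weighting; the
    paper's exact density is not reproduced here)."""
    rng = np.random.default_rng(seed)
    weights = opacities / opacities.sum()                 # mixture weights
    comp = rng.choice(len(means), size=n_new, p=weights)  # pick a splat per sample
    # Sample within each chosen splat's anisotropic Gaussian.
    return np.array([rng.multivariate_normal(means[i], covs[i]) for i in comp])

# Toy scene: three 3D Gaussians with unequal opacities.
means = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
covs = np.stack([np.eye(3) * 0.01] * 3)
opac = np.array([0.9, 0.5, 0.1])
pts = resample_centers(means, covs, opac, 1000, seed=0)
```

The resampled centers concentrate where the opacity-weighted density does, which is the sense in which the new set "keeps the same scene support" while the overlap graph is rebuilt from scratch.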

Core claim

ReorgGS demonstrates that a converged 3DGS representation can be rebuilt by resampling centers from the empirical distribution defined by the existing Gaussians, estimating local covariances with kNN, and initializing low opacity, after which continued optimization with the original renderer and loss suppresses floaters, reduces redundant overlap, and improves fitting quality without changing the rendering pipeline.

What carries the argument

The equivalent-distribution reorganization step that resamples centers from the empirical probability field of the current Gaussians, estimates anisotropic covariances via kNN, and rebuilds the visibility structure before resuming training.
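The kNN covariance step can be sketched as follows. The estimator (sample covariance of the neighborhood, including the point itself) and the default k are assumptions; the paper's exact construction and its choice of k are not specified here.

```python
import numpy as np

def knn_covariances(centers, k=8, eps=1e-8):
    """Estimate a local anisotropic covariance at each center from its k
    nearest neighbors. Brute-force pairwise distances are fine at sketch
    scale; production code would use a spatial index."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    # The k+1 smallest distances include the point itself (distance 0).
    idx = np.argpartition(d2, k, axis=1)[:, :k + 1]
    dim = centers.shape[1]
    covs = np.empty((len(centers), dim, dim))
    for i, nbrs in enumerate(idx):
        diff = centers[nbrs] - centers[nbrs].mean(axis=0)
        covs[i] = diff.T @ diff / k + eps * np.eye(dim)  # regularize degenerate cases
    return covs

rng = np.random.default_rng(0)
new_centers = rng.normal(size=(200, 3))
new_covs = knn_covariances(new_centers, k=8)
```

The small `eps` ridge keeps every covariance positive-definite even when a neighborhood is nearly planar, which matters because degenerate covariances would reintroduce the kind of ill-conditioning the reorganization is meant to remove.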

If this is right

  • The reorganized model preserves scene support while improving gradient flow through alpha compositing.
  • Opacity-weighted overlap decreases, weakening local parameter coupling during training.
  • Persistent floaters are suppressed under the same additional optimization budget.
  • Rendering cost from redundant overlap is lowered at fixed Gaussian count.
  • Fitting quality improves without requiring changes to the renderer or loss function.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Periodic reorganization during training, rather than only at convergence, might prevent degeneration from occurring in the first place.
  • The same resampling-plus-kNN-covariance idea could apply to other point-based or splat-based scene representations that suffer from parameter coupling.
  • Different center-sampling densities or covariance estimation radii could be tested to trade off between fidelity and optimization speed.
  • The distinction between distributional equivalence and optimization equivalence may apply to any alpha-composited representation whose parameters are optimized jointly.

Load-bearing premise

That resampling centers from the existing Gaussian set and re-estimating their covariances with nearest neighbors will produce a representation that matches the original scene support yet yields better optimization behavior without introducing new artifacts.
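One way to make this premise precise (an editorial formalization; the paper's exact density is not reproduced here) is to treat the converged set of N Gaussians as a normalized opacity-weighted mixture:

```latex
p(\mathbf{x}) \;=\; \frac{\sum_{i=1}^{N} \alpha_i \,\mathcal{N}\!\left(\mathbf{x};\, \boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i\right)}{\sum_{i=1}^{N} \alpha_i}
```

Resampling new centers from p then preserves scene support by construction; what still has to be shown is that the subsequent kNN covariance re-estimation keeps the alpha-composited output close to the converged model.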

What would settle it

Run ReorgGS on a converged 3DGS model, continue optimization for a fixed budget, and compare the final PSNR and visual quality against two baselines given the same additional budget: simply continuing optimization on the original model, and applying only an opacity reset. If reorganization fails to beat both baselines, the core claim falls.
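One quantity worth reporting alongside that comparison is opacity-weighted overlap, which the analysis predicts should decrease after reorganization. A minimal proxy, assuming isotropic per-Gaussian scales and a Gaussian kernel over center distances (an illustrative metric, not the paper's definition):

```python
import numpy as np

def opacity_weighted_overlap(means, scales, opacities):
    """Average pairwise overlap proxy: a Gaussian kernel of the center
    distance at the pair's combined scale, weighted by the product of
    opacities. Illustrative only; the paper's metric is not given here."""
    n = len(means)
    d2 = ((means[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    s2 = scales[:, None] ** 2 + scales[None, :] ** 2   # combined variance
    kernel = np.exp(-d2 / (2 * s2))
    w = opacities[:, None] * opacities[None, :]
    mask = ~np.eye(n, dtype=bool)                      # exclude self-pairs
    return float((w * kernel)[mask].sum() / mask.sum())

# A coincident pair versus a well-separated pair.
mu_dense = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
mu_far = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
sc = np.array([0.1, 0.1])
op = np.array([0.8, 0.8])
dense = opacity_weighted_overlap(mu_dense, sc, op)
sparse = opacity_weighted_overlap(mu_far, sc, op)
```

A coincident pair scores the product of opacities (0.64 here) while a well-separated pair scores near zero, so a drop in this proxy after reorganization would indicate a sparser overlap graph, separate from any PSNR gain.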

Figures

Figures reproduced from arXiv: 2605.08739 by Hua Wang, Kaimin Liao, Luchao Wang, Qian Ren, Yaohua Tang, Zhi Chen.

Figure 1: Reorganization unlocks the full potential of post-training. Given a suboptimal baseline (e.g., …)

Figure 2: Method pipeline and optimization dynamics of ReorgGS. (a) Vanilla 3DGS often degen…

Figure 3: Plug-and-play capability and qualitative comparisons across diverse 3DGS baselines. We…

Figure 4: Ablation studies of ReorgGS. Top (Effectiveness of Reorg): We demonstrate the necessity…

Figure 5: Qualitative comparisons on the Kitchen scene. We evaluate Reorg across six baselines (rows). Columns show the original baseline, standard post-training, Reorg (1 pass), Reorg (3 passes), and Ground Truth. As highlighted in the red insets, standard post-training struggles with the dark bottle boundaries and countertop reflections, whereas Reorg effectively breaks the optimization deadlocks to restore sharp, …

Figure 6: Qualitative comparisons on the Playroom scene. This scene challenges the parameterization of thin geometric structures (wooden railings) against a plain background. While standard post-training often leaves floating artifacts or broken geometry, Reorg successfully restores continuous and clean foreground structures without polluting the background.

Figure 7: Qualitative comparisons on the Bicycle scene. This scene presents extreme depth occlusions between thin metallic cables and a high-frequency foliage background. Reorg accurately disentangles the foreground objects from the complex background, significantly alleviating the blurring artifacts seen in standard gradient-based finetuning.
read the original abstract

A converged 3D Gaussian Splatting (3DGS) model may approximate the target scene while remaining poorly parameterized for further optimization. We identify this failure mode as \emph{parameterization degeneration}: high-opacity floaters attenuate gradients to true surfaces through alpha compositing, and redundant overlapping clusters create strongly coupled parameter blocks with nearly collinear Jacobian responses. These effects explain why continued optimization can plateau even when the model still contains removable artifacts. We propose ReorgGS, an equivalent distribution reorganization method for converged 3DGS models. ReorgGS treats the existing Gaussian set as an empirical probability field, resamples centers from it, estimates local anisotropic covariances with kNN, initializes low opacity, and continues optimization with the original 3DGS renderer and loss. Unlike opacity reset, which only rescales opacity on the old overlap graph, ReorgGS rebuilds centers, covariances, and visibility structure, thereby changing the graph itself. Our analysis shows that distributional equivalence is not optimization equivalence. The reorganized model preserves scene support while improving gradient accessibility under alpha compositing and reducing opacity-weighted overlap, thereby weakening local parameter coupling during subsequent optimization. Under the same additional optimization budget, ReorgGS improves fitting quality at a fixed Gaussian count, suppresses persistent floaters, and reduces rendering overhead from redundant overlap.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes ReorgGS as a reorganization procedure for converged 3D Gaussian Splatting models suffering from parameterization degeneration (high-opacity floaters and redundant overlaps that impede gradients under alpha compositing). The method treats the existing Gaussians as an empirical probability field, resamples centers, estimates local anisotropic covariances via kNN, resets opacities to low values, and resumes optimization with the unmodified 3DGS renderer and loss. The central claim is that this produces a distributionally equivalent representation whose changed overlap graph improves gradient accessibility and reduces parameter coupling, yielding higher fitting quality, fewer persistent floaters, and lower rendering cost at fixed Gaussian count under the same additional optimization budget.

Significance. If the distributional equivalence is tight and the reported gains are reproducible, ReorgGS would offer a lightweight, renderer-agnostic post-processing step that addresses a practical optimization plateau in 3DGS without increasing model size or altering the forward model. The explicit distinction between distributional and optimization equivalence, together with the reuse of the original loss, is a constructive contribution to understanding how representation geometry affects training dynamics in explicit radiance-field methods.

major comments (2)
  1. [Abstract] Abstract and method description: the claim that the reorganized set 'preserves scene support' while only the optimization graph changes rests on the unquantified assertion that kNN covariance estimation introduces negligible distortion to the original overlap structure. No bound or measurement (e.g., initial PSNR/SSIM difference or opacity-weighted overlap metric before any further gradient steps) is supplied to show that the alpha-composited output remains sufficiently close to the converged model; without this, subsequent improvements cannot be attributed solely to better gradient accessibility.
  2. [Method] Method (kNN covariance step): the procedure introduces a free hyper-parameter (neighbor count) whose effect on local anisotropy and potential smoothing of the empirical distribution is not analyzed. If the chosen k alters the support or overlap statistics even modestly, the premise that the reorganized model is 'distributionally equivalent' yet 'optimizationally superior' is undermined, because the initial rendering discrepancy could itself drive the observed changes.
minor comments (2)
  1. The phrase 'empirical probability field' is introduced without an accompanying equation or precise definition of how the discrete Gaussian set is converted into a continuous density for resampling; a short formalization would improve clarity.
  2. [Abstract] The abstract contains several long compound sentences that could be split to improve readability without changing technical content.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. The points raised regarding quantification of distributional equivalence and analysis of the kNN hyperparameter are well-taken, and we will strengthen the paper accordingly through additional measurements and experiments.

read point-by-point responses
  1. Referee: [Abstract] Abstract and method description: the claim that the reorganized set 'preserves scene support' while only the optimization graph changes rests on the unquantified assertion that kNN covariance estimation introduces negligible distortion to the original overlap structure. No bound or measurement (e.g., initial PSNR/SSIM difference or opacity-weighted overlap metric before any further gradient steps) is supplied to show that the alpha-composited output remains sufficiently close to the converged model; without this, subsequent improvements cannot be attributed solely to better gradient accessibility.

    Authors: We agree that explicit quantification is needed to separate reorganization effects from subsequent optimization gains. In the revised manuscript we will add a dedicated paragraph and accompanying table reporting PSNR and SSIM of the reorganized model immediately after the ReorgGS procedure (before any further gradient steps) relative to the original converged model. We will also introduce an opacity-weighted overlap metric that measures changes in the visibility graph. These results will show that the initial rendering discrepancy is small, thereby supporting the claim that later improvements stem primarily from improved gradient accessibility. revision: yes

  2. Referee: [Method] Method (kNN covariance step): the procedure introduces a free hyper-parameter (neighbor count) whose effect on local anisotropy and potential smoothing of the empirical distribution is not analyzed. If the chosen k alters the support or overlap statistics even modestly, the premise that the reorganized model is 'distributionally equivalent' yet 'optimizationally superior' is undermined, because the initial rendering discrepancy could itself drive the observed changes.

    Authors: We acknowledge that the neighbor count k is a free hyperparameter whose influence on local covariance estimation and distribution fidelity requires explicit examination. The revised version will include an ablation study in the experiments section that varies k over a representative range and reports both the immediate post-reorganization PSNR/SSIM and the final optimized quality. We will also describe our default selection rule (based on local point density) and show that within a practical operating range the distributional distortion remains limited while optimization benefits are preserved. This analysis will directly address the concern that initial discrepancy might confound the reported gains. revision: yes

Circularity Check

0 steps flagged

No circularity: algorithmic reorganization reuses original renderer without reducing claims to fitted inputs or self-citations

full rationale

The paper describes ReorgGS as a procedural method: treat converged Gaussians as an empirical probability field, resample centers, estimate covariances via kNN, reset opacities low, and resume optimization with the unchanged 3DGS renderer and loss. No equations are derived that equate a 'prediction' to its own inputs by construction. No load-bearing self-citations, uniqueness theorems, or ansatz smuggling appear. The claim that distributional equivalence is not optimization equivalence is presented as an analysis of the procedure's effect on gradient accessibility, not as a tautological reduction. The method is self-contained against external benchmarks (original 3DGS renderer/loss) and does not rename known results or fit parameters to the target outcome.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach rests on treating the converged Gaussian set as a valid empirical probability field for resampling and on the assumption that kNN provides suitable local covariances; beyond standard 3DGS components, the only new free parameter is the kNN neighbor count, and no entities are invented.

free parameters (1)
  • kNN neighbor count
    Number of neighbors used for local covariance estimation is an implicit hyperparameter whose value is not specified.
axioms (1)
  • domain assumption: The converged Gaussian set forms an empirical probability field from which centers can be resampled while preserving scene support.
    Invoked as the foundation for the center resampling step.

pith-pipeline@v0.9.0 · 5548 in / 1311 out tokens · 58591 ms · 2026-05-12T01:56:11.133584+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

88 extracted references · 88 canonical work pages · 1 internal anchor
