pith. machine review for the scientific record.

arxiv: 2604.14928 · v2 · submitted 2026-04-16 · 💻 cs.CV · cs.GR

Recognition: unknown

Hybrid Latents: Geometry-Appearance-Aware Surfel Splatting

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 11:30 UTC · model grok-4.3

classification 💻 cs.CV cs.GR
keywords hybrid gaussian splatting · novel view synthesis · geometry appearance separation · hash grid features · frequency decomposition · surfel splatting · probabilistic pruning · opacity loss

The pith

Hybrid per-Gaussian latents with hash grids separate geometry from appearance in 2D surfel splatting for higher fidelity with far fewer primitives.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a hybrid radiance representation that pairs per-Gaussian latent vectors with hash-grid features inside a 2D Gaussian splatting pipeline. This pairing steers the optimizer to assign low-frequency components to geometry and high-frequency components to appearance, reducing the chance that texture details will hide shape errors. Hard opacity falloffs on the Gaussians and a sparsity-inducing loss then prune away redundant elements. A reader would care because the result is a compact scene model that renders novel views more accurately and efficiently than standard Gaussian methods while using roughly one-tenth the number of primitives on both synthetic and real data.
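
To make the mechanism concrete, here is a minimal sketch of the hybrid query, with illustrative names and sizes that are our assumptions rather than the paper's code. A dense trilinear grid stands in for the single-resolution hash grid, and the decode happens per surfel for brevity, whereas the paper blends features along each view ray before the MLP (see Figure 2).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLatents(nn.Module):
    """Sketch: each surfel owns a learned latent (meant to absorb
    low-frequency structure) that is queried together with a spatial
    grid feature (meant to carry high-frequency appearance)."""

    def __init__(self, n_surfels, latent_dim=8, grid_dim=8, grid_res=64):
        super().__init__()
        self.surfel_latents = nn.Parameter(torch.zeros(n_surfels, latent_dim))
        # Dense grid as a stand-in for the paper's hash grid.
        self.grid = nn.Parameter(
            torch.zeros(1, grid_dim, grid_res, grid_res, grid_res)
        )
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + grid_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, surfel_ids, positions, view_dirs):
        # positions: (M, 3) in [-1, 1]^3; view_dirs: (M, 3) unit vectors.
        grid_feat = F.grid_sample(
            self.grid, positions.view(1, -1, 1, 1, 3), align_corners=True
        ).view(self.grid.shape[1], -1).t()  # (M, grid_dim)
        feat = torch.cat(
            [self.surfel_latents[surfel_ids], grid_feat, view_dirs], dim=-1
        )
        return self.mlp(feat)  # per-surfel view-dependent color
```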

Core claim

The authors show that a hybrid Gaussian-hash-grid model, augmented with per-Gaussian latent features, hard opacity falloffs, and probabilistic pruning driven by a binary cross-entropy sparsity loss, produces 2D Gaussian scene reconstructions that disentangle geometry from appearance more effectively than prior Gaussian splatting approaches. This frequency-biased decomposition yields superior reconstruction fidelity together with an order of magnitude reduction in the number of active primitives.
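
One plausible reading of the sparsity machinery is sketched below; the exact loss form and the pruning threshold are our assumptions, since the abstract only names a sparsity-inducing BCE opacity loss with probabilistic pruning.

```python
import torch

def bce_opacity_loss(opacities: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Binary cross-entropy of each opacity against itself, i.e. the binary
    entropy H(o), minimized at o = 0 and o = 1. Optimization therefore pushes
    surfels toward fully-off or fully-on, so redundant ones can be turned off.
    This is one common reading, not a quote of the paper's equation."""
    o = opacities.clamp(eps, 1.0 - eps)
    return -(o * o.log() + (1.0 - o) * (1.0 - o).log()).mean()

def prune_mask(opacities: torch.Tensor, threshold: float = 0.01) -> torch.Tensor:
    # Surfels driven near zero opacity are dropped; the 0.01 cutoff is an
    # illustrative choice, and the paper's pruning is probabilistic.
    return opacities > threshold
```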

What carries the argument

The hybrid latent mechanism: each Gaussian carries its own learned latent vector, queried in tandem with hash-grid features, which biases the optimizer toward an explicit low- versus high-frequency separation between geometry and appearance.

If this is right

  • High-frequency textures become less able to compensate for geometric inaccuracies.
  • Rendering speed increases because far fewer Gaussians remain active after pruning.
  • A minimal set of primitives suffices to represent the full scene without loss of fidelity.
  • The same separation principle applies to both synthetic and real-world multi-view capture.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The pruning step could be adapted to other point-based or surfel representations to lower memory use in large-scale scenes.
  • Explicit frequency separation might reduce temporal artifacts if the model is extended to dynamic video input.
  • Storage and transmission costs for 3D scene assets could drop substantially if the compact primitive count generalizes beyond the tested datasets.

Load-bearing premise

The combination of per-Gaussian latents, hash-grid features, and hard opacity falloffs will consistently push the optimizer toward reliable frequency separation without creating new artifacts or demanding heavy scene-by-scene tuning.

What would settle it

On a held-out scene with documented geometric misalignment, measure whether PSNR and geometric error metrics improve while the final active primitive count remains near one-tenth of a standard Gaussian baseline; if quality gains disappear or primitive count does not shrink, the separation claim fails.
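
A minimal harness for that test might look as follows; the dictionary keys are hypothetical, and only the one-tenth ratio comes from the claim itself.

```python
import math

def psnr_from_mse(mse: float) -> float:
    # Peak signal-to-noise ratio for images normalized to [0, 1].
    return 10.0 * math.log10(1.0 / mse)

def separation_claim_holds(hybrid: dict, baseline: dict, ratio: float = 0.1) -> bool:
    """Pass iff the hybrid model matches or beats the baseline on fidelity
    and geometry while keeping at most ~one-tenth of the active primitives."""
    return (
        hybrid["psnr"] >= baseline["psnr"]
        and hybrid["geom_error"] <= baseline["geom_error"]
        and hybrid["n_primitives"] <= ratio * baseline["n_primitives"]
    )
```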

Figures

Figures reproduced from arXiv: 2604.14928 by Klaus Engel, Neel Kelkar, Rüdiger Westermann, Simon Niedermayr.

Figure 1
Figure 1: Hybrid Latents disentangle low-frequency scene components (via per-surfel latent features) from high-frequency texture details (via a hash-grid). They achieve superior visual quality with fewer surfels and improve geometric fidelity, shown by an accurate silhouette (vs. ground truth in white) and depth reconstruction (right).
Figure 2
Figure 2: Method overview: features from per-surfel representations and a single-resolution hash-grid are blended via volumetric rendering along each view ray. The blended feature, augmented with viewing direction, is processed by an MLP to output color.
Figure 3
Figure 3: Beta kernels and their effects. The adjacent body text (Sec. 3.2) explains that deformable beta kernels [16] replace the Gaussian kernel of 3DGS with a Beta function with a learnable parameter b, able to represent both Gaussian-like and opaque ellipsoidal shape functions; the paper swaps the Gaussian kernel of 2DGS for the beta kernel \mathcal{B}(x;b) = (1-x)^{\beta(b)}, \quad \beta(b) = 4\sigma(b), \quad x \in [0,1] (transcribed in the code sketch after this figure list).
Figure 4
Figure 4: Qualitative comparison of Beta Splatting [16].
Figure 5
Figure 5: Per-primitive information with increasing sparsity.
Figure 6
Figure 6: Qualitative results for 2DGS [11], Beta Splatting [16], NeST [33] and Ours on the Mip-NeRF dataset [1]. The images are contrast-enhanced.
Figure 7
Figure 7: Per-primitive features vs. texturing methods.
Figure 8
Figure 8: Using a single hash-grid layer vs. pure per-primitive features.
Figure 9
Figure 9: The separation of features into low-frequency structural components.
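
The beta kernel quoted in Figure 3's caption is simple enough to transcribe; the sketch below follows that formula, reading sigma as the logistic sigmoid (an assumption consistent with the learnable-parameter description in [16]).

```python
import torch

def beta_kernel(x: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """B(x; b) = (1 - x)^{beta(b)} with beta(b) = 4 * sigmoid(b) and x in
    [0, 1], per Figure 3's caption. Small beta(b) gives a near-opaque plateau
    with a hard edge; larger beta(b) decays smoothly, closer to a
    Gaussian-like falloff."""
    beta = 4.0 * torch.sigmoid(b)
    return (1.0 - x.clamp(0.0, 1.0)) ** beta
```
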
Original abstract

We introduce a hybrid Gaussian-hash-grid radiance representation for reconstructing 2D Gaussian scene models from multi-view images. Similar to NeST splatting, our approach reduces the entanglement between geometry and appearance common in NeRF-based models, but adds per-Gaussian latent features alongside hash-grid features to bias the optimizer toward a separation of low- and high-frequency scene components. This explicit frequency-based decomposition reduces the tendency of high-frequency texture to compensate for geometric errors. Encouraging Gaussians with hard opacity falloffs further strengthens the separation between geometry and appearance, improving both geometry reconstruction and rendering efficiency. Finally, probabilistic pruning combined with a sparsity-inducing BCE opacity loss allows redundant Gaussians to be turned off, yielding a minimal set of Gaussians sufficient to represent the scene. Using both synthetic and real-world datasets, we compare against the state of the art in Gaussian-based novel-view synthesis and demonstrate superior reconstruction fidelity with an order of magnitude fewer primitives.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces a hybrid Gaussian-hash-grid radiance representation for 2D Gaussian surfel splatting from multi-view images. Per-Gaussian latent features are combined with hash-grid features to bias the optimizer toward explicit low/high-frequency separation, reducing the tendency of high-frequency appearance to compensate for geometric errors. Hard opacity falloffs are encouraged to further disentangle geometry and appearance, while probabilistic pruning with a BCE sparsity-inducing loss minimizes the number of active primitives. Experiments on synthetic and real-world datasets report superior novel-view synthesis fidelity relative to prior Gaussian-based methods, achieved with an order of magnitude fewer primitives.

Significance. If the intended frequency separation is reliably achieved without introducing new artifacts or per-scene tuning, the work offers a principled route to compact, high-fidelity 3D scene representations that improve both reconstruction accuracy and rendering efficiency. The explicit mechanism for preventing textural compensation of geometric errors, together with the reported primitive reduction, would be a meaningful advance for practical novel-view synthesis and downstream applications in computer vision.

major comments (2)
  1. Methods (hybrid latents and frequency decomposition): the central claim that per-Gaussian latent features plus hash-grid features enforce a reliable low/high-frequency split (preventing high-frequency texture from compensating for geometric errors) is not supported by direct evidence such as frequency spectra of the learned components, an ablation isolating the latent/hash split, or failure-case analysis. Without such verification the reported fidelity gains and primitive reduction cannot be confidently attributed to the intended decomposition rather than an entangled solution or other implementation choices.
  2. Experiments (quantitative results and pruning): the headline result of superior reconstruction with ~10x fewer primitives relies on the hard opacity falloffs and BCE pruning successfully turning off redundant Gaussians without degrading quality. No analysis is provided of per-scene hyperparameter sensitivity, artifact introduction in complex geometry, or comparison against a baseline that uses the same pruning but without the hybrid latents, leaving the robustness of the efficiency claim unverified.
minor comments (2)
  1. Clarify in the introduction how the hybrid latents differ in mechanism from the NeST splatting baseline referenced in the abstract.
  2. Ensure the equations defining the hard opacity falloff and the BCE sparsity loss are explicitly numbered and cross-referenced in the text and experiments.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below, providing clarifications on our design choices and outlining specific revisions to strengthen the evidence and robustness analysis in the manuscript.

Point-by-point responses
  1. Referee: Methods (hybrid latents and frequency decomposition): the central claim that per-Gaussian latent features plus hash-grid features enforce a reliable low/high-frequency split (preventing high-frequency texture from compensating for geometric errors) is not supported by direct evidence such as frequency spectra of the learned components, an ablation isolating the latent/hash split, or failure-case analysis. Without such verification the reported fidelity gains and primitive reduction cannot be confidently attributed to the intended decomposition rather than an entangled solution or other implementation choices.

    Authors: We agree that the original manuscript lacks direct verification of the intended frequency separation. The hybrid representation was designed to encourage low-frequency geometry via the per-Gaussian latents and high-frequency appearance via the hash grid, with results suggesting reduced compensation for geometric errors. However, without explicit ablations or spectral analysis, attribution remains indirect. In the revision we will add: (i) an ablation isolating the hybrid components (full model vs. hash-grid only vs. per-Gaussian latents only), (ii) frequency-domain analysis of the learned features where computationally feasible, and (iii) discussion of any observed failure cases in complex scenes. revision: yes

  2. Referee: Experiments (quantitative results and pruning): the headline result of superior reconstruction with ~10x fewer primitives relies on the hard opacity falloffs and BCE pruning successfully turning off redundant Gaussians without degrading quality. No analysis is provided of per-scene hyperparameter sensitivity, artifact introduction in complex geometry, or comparison against a baseline that uses the same pruning but without the hybrid latents, leaving the robustness of the efficiency claim unverified.

    Authors: We acknowledge that the efficiency claims would be more robust with additional controls. While the pruning and hard opacity mechanisms are central to minimizing primitives, we did not report sensitivity sweeps or the requested baseline. In the revised version we will expand the experiments to include: (i) per-scene hyperparameter sensitivity for the BCE sparsity weight and opacity falloff threshold, (ii) qualitative/quantitative assessment of artifacts on scenes with complex geometry, and (iii) a direct comparison against a non-hybrid baseline that applies identical pruning and opacity regularization. revision: yes
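
The spectral check promised above could be as simple as the following sketch (the protocol is ours, not the authors' stated plan): render the per-surfel-latent branch and the hash-grid branch separately, then compare how much of each rendering's spectral energy sits below a normalized cutoff frequency.

```python
import torch

def lowfreq_energy_ratio(image: torch.Tensor, cutoff: float = 0.1) -> float:
    """Fraction of 2D spectral power below a normalized radial frequency.
    If the decomposition works as intended, renders from the per-surfel
    latents alone should score high and hash-grid-only renders low; the
    cutoff value is an illustrative assumption."""
    h, w = image.shape[-2:]
    power = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1)).abs() ** 2
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, h), torch.linspace(-0.5, 0.5, w), indexing="ij"
    )
    low = (yy ** 2 + xx ** 2).sqrt() < cutoff
    return (power[..., low].sum() / power.sum()).item()
```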

Circularity Check

0 steps flagged

No significant circularity; method components are independently defined and externally evaluated.

Full rationale

The paper introduces a hybrid Gaussian-hash-grid representation with per-Gaussian latent features, hash-grid features, hard opacity falloffs, and BCE-based probabilistic pruning as novel mechanisms to encourage frequency separation and sparsity. These elements are presented as design choices motivated by reducing geometry-appearance entanglement, not as quantities derived from or fitted to the final fidelity metrics. Evaluation occurs on external synthetic and real-world datasets against prior Gaussian splatting methods, with no equations or claims in the abstract reducing the reported primitive reduction or quality gains to a self-referential fit or self-citation chain. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The approach rests on the assumption that frequency separation can be encouraged through architectural choices and loss terms without explicit supervision on geometry or appearance.

axioms (1)
  • domain assumption Multi-view images provide sufficient constraints to recover both geometry and appearance when the representation encourages frequency separation.
    Implicit in the comparison to NeRF-based models and the claim of reduced entanglement.

pith-pipeline@v0.9.0 · 5469 in / 1189 out tokens · 109499 ms · 2026-05-10T11:30:08.757140+00:00 · methodology


Reference graph

Works this paper leans on

35 extracted references · 12 canonical work pages

  1. [1] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5460–5469. IEEE, New Orleans, LA, USA (2022). https://doi.org/10.1109/CVPR52688.2022.00539
  2. [2] Chao, B., Tseng, H.Y., Porzi, L., Gao, C., Li, T., Li, Q., Saraf, A., Huang, J.B., Kopf, J., Wetzstein, G., Kim, C.: Textured Gaussians for enhanced 3D scene appearance modeling. In: CVPR (2025)
  3. [3] Chao, B., Tseng, H.Y., Porzi, L., Gao, C., Li, T., Li, Q., Saraf, A., Huang, J.B., Kopf, J., Wetzstein, G., et al.: Textured Gaussians for enhanced 3D scene appearance modeling. In: Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 8964–8974 (2025)
  4. [4] Chen, Y., Wu, Q., Lin, W., Harandi, M., Cai, J.: HAC++: Towards 100x compression of 3D Gaussian splatting. IEEE Transactions on Pattern Analysis and Machine Intelligence (2025)
  5. [5] Duckworth, D., Hedman, P., Reiser, C., Zhizhin, P., Thibert, J.F., Lučić, M., Szeliski, R., Barron, J.T.: SMERF: Streamable memory efficient radiance fields for real-time large-scene exploration. ACM Transactions on Graphics (TOG) 43(4), 1–13 (2024)
  6. [6] Fang, G., Wang, B.: Mini-Splatting: Representing scenes with a constrained number of Gaussians. In: European Conference on Computer Vision, pp. 165–
  7. [7] Girish, S., Shrivastava, A., Gupta, K.: SHACIRA: Scalable hash-grid compression for implicit neural representations. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 17513–17524 (2023)
  8. [8] Guédon, A., Gomez, D., Maruani, N., Gong, B., Drettakis, G., Ovsjanikov, M.: MILo: Mesh-in-the-loop Gaussian splatting for detailed and efficient surface reconstruction. ACM Trans. Graph. 44(6) (2025). https://doi.org/10.1145/3763339
  9. [9] Hamdi, A., Melas-Kyriazi, L., Mai, J., Qian, G., Liu, R., Vondrick, C., Ghanem, B., Vedaldi, A.: GES: Generalized exponential splatting for efficient radiance field rendering. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19812–19822 (2024)
  10. [10] Held, J., Vandeghen, R., Deliege, A., Hamdi, A., Cioppa, A., Giancola, S., Vedaldi, A., Ghanem, B., Tagliasacchi, A., Van Droogenbroeck, M.: Triangle splatting for real-time radiance field rendering. arXiv (2025)
  11. [11] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D Gaussian splatting for geometrically accurate radiance fields. In: SIGGRAPH 2024 Conference Papers. Association for Computing Machinery (2024). https://doi.org/10.1145/3641519.3657428
  12. [12] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D Gaussian splatting for geometrically accurate radiance fields. In: ACM SIGGRAPH 2024 Conference Papers, pp. 1–11 (2024)
  13. [13] Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., Aanæs, H.: Large scale multi-view stereopsis evaluation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 406–413. IEEE (2014)
  14. [14] Kerbl, B., Kopanas, G., Leimkuehler, T., Drettakis, G.: 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Trans. Graph. 42(4) (2023). https://doi.org/10.1145/3592433
  15. [15] Kheradmand, S., Rebain, D., Sharma, G., Sun, W., Tseng, Y.C., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: 3D Gaussian splatting as Markov chain Monte Carlo. In: Advances in Neural Information Processing Systems (NeurIPS) (2024), spotlight presentation
  16. [16] Liu, R., Sun, D., Chen, M., Wang, Y., Feng, A.: Deformable beta splatting. In: ACM SIGGRAPH 2025 Conference Proceedings (SIGGRAPH '25). Association for Computing Machinery, New York, NY, USA (2025)
  17. [17] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM 65(1), 99–106 (2021)
  18. [18] Müller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (ToG) 41(4), 1–15 (2022)
  19. [19] Niedermayr, S., Stumpfegger, J., Westermann, R.: Compressed 3D Gaussian splatting for accelerated novel view synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10349–10358 (2024)
  20. [20] Papantonakis, P., Kopanas, G., Durand, F., Drettakis, G.: Content-aware texturing for Gaussian splatting. arXiv preprint arXiv:2512.02621 (2025)
  21. [21] Pfister, H., Zwicker, M., Van Baar, J., Gross, M.: Surfels: Surface elements as rendering primitives. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 335–342 (2000)
  22. [22] Reiser, C., Szeliski, R., Verbin, D., Srinivasan, P., Mildenhall, B., Geiger, A., Barron, J., Hedman, P.: MERF: Memory-efficient radiance fields for real-time view synthesis in unbounded scenes. ACM Transactions on Graphics (ToG) 42(4), 1–12 (2023)
  23. [23] Rong, V., Chen, J., Bahmani, S., Kutulakos, K.N., Lindell, D.B.: GStex: Per-primitive texturing of 2D Gaussian splatting for decoupled appearance and geometry modeling. arXiv preprint arXiv:2409.12954 (2024)
  24. [24] Rong, V., Chen, J., Bahmani, S., Kutulakos, K.N., Lindell, D.B.: GStex: Per-primitive texturing of 2D Gaussian splatting for decoupled appearance and geometry modeling. In: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 3508–3518. IEEE (2025)
  25. [25] Song, Y., Lin, H., Lei, J., Liu, L., Daniilidis, K.: HDGS: Textured 2D Gaussian splatting for enhanced scene rendering. arXiv preprint arXiv:2412.01823 (2024)
  26. [26] Svitov, D., Morerio, P., Agapito, L., Del Bue, A.: Billboard splatting (BBSplat): Learnable textured primitives for novel view synthesis (2025). https://arxiv.org/abs/2411.08508
  27. [27] Svitov, D., Morerio, P., Agapito, L., Del Bue, A.: Billboard splatting (BBSplat): Learnable textured primitives for novel view synthesis. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 25029–25039 (2025)
  28. [28] Wang, X., Yi, R., Ma, L.: AdR-Gaussian: Accelerating Gaussian splatting with adaptive radius. In: SIGGRAPH Asia 2024 Conference Papers, pp. 1–10 (2024)
  29. [29] Weiss, S., Bradley, D.: Gaussian billboards: Expressive 2D Gaussian splatting with textures (2024). https://arxiv.org/abs/2412.12734
  30. [30] Weiss, S., Westermann, R.: Differentiable Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 28(1), 562–572 (2022). https://doi.org/10.1109/TVCG.2021.3114769
  31. [31] Xiang, F., Xu, Z., Hasan, M., Hold-Geoffroy, Y., Sunkavalli, K., Su, H.: NeuTex: Neural texture mapping for volumetric neural rendering. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7119–7128 (2021)
  32. [32] Xu, R., Chen, W., Wang, J., Liu, Y., Wang, P., Gao, L., Xin, S., Komura, T., Li, X., Wang, W.: SuperGaussians: Enhancing Gaussian splatting using primitives with spatially varying colors (2024)
  33. [33] Zhang, X., Chen, A., Xiong, J., Dai, P., Shen, Y., Xu, W.: Neural shell texture splatting: More details and fewer primitives. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 25229–25238 (2025)
  34. [34] Zhang, Y., Jia, W., Niu, W., Yin, M.: GaussianSpa: An "optimizing-sparsifying" simplification framework for compact and high-quality 3D Gaussian splatting. In: Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 26673–26682 (2025)
  35. [35] Zwicker, M., Pfister, H., Van Baar, J., Gross, M.: EWA splatting. IEEE Transactions on Visualization and Computer Graphics 8(3), 223–238 (2002)