pith. machine review for the scientific record.

arXiv: 2603.24725 · v2 · submitted 2026-03-25 · 💻 cs.CV · cs.GR

Recognition: 2 theorem links · Lean Theorem

Confidence-Based Mesh Extraction from 3D Gaussians

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 23:55 UTC · model grok-4.3

classification: 💻 cs.CV · cs.GR
keywords: 3D Gaussian Splatting · mesh extraction · confidence learning · surface reconstruction · view-dependent effects · self-supervised learning · unbounded scenes

The pith

Learnable per-primitive confidence values added to 3D Gaussian Splatting resolve view-dependent ambiguities to deliver state-of-the-art unbounded mesh extraction while staying efficient.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that a self-supervised confidence framework can improve mesh extraction from 3D Gaussian Splatting in scenes containing reflections, transparency, and other view-dependent effects. Each Gaussian primitive receives a learnable confidence value that dynamically weights photometric against geometric supervision signals. Additional losses penalize color and normal variance within each primitive, and the D-SSIM appearance loss is decoupled into separate terms. A sympathetic reader would care because prior solutions required multi-view consistency checks or large pre-trained models that reduce the speed advantage of explicit 3DGS representations.
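The exact objective is not reproduced on this page, but the Figure 3 caption mentions a balance parameter β ∈ ℝ⁺ and notes that the loss reduces to a single term when Ĉ = 1. The following is a minimal sketch consistent with that description, in the spirit of learned loss weighting (cf. Kendall and Gal [27]); the regularizer R is our placeholder, not the paper's formula:

```latex
% Illustrative sketch only, not the paper's verbatim objective.
% \hat{C} \in (0,1) is the rendered per-pixel confidence; \beta \in \mathbb{R}^{+}
% balances the data term against a regularizer R that discourages the trivial
% escape of predicting low confidence everywhere (R is hypothetical).
\mathcal{L}(\hat{C})
  = \hat{C}\,\mathcal{L}_{\mathrm{photo}}
  + \bigl(1-\hat{C}\bigr)\,\mathcal{L}_{\mathrm{geom}}
  + \beta\, R(\hat{C})
```

At Ĉ = 1 this collapses to the photometric term alone, matching the caption's remark; which supervision each extreme favors is our reading of the page, not a quoted equation.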

Core claim

We introduce a self-supervised confidence framework to 3DGS in which learnable confidence values dynamically balance photometric and geometric supervision. Extending this formulation, we introduce losses that penalize per-primitive color and normal variance and demonstrate their benefit to surface extraction. We further complement the approach with an improved appearance model obtained by decoupling the individual terms of the D-SSIM loss. Our final method achieves state-of-the-art results for unbounded meshes while remaining highly efficient.

What carries the argument

Learnable per-primitive confidence values that dynamically balance photometric and geometric supervision signals.
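As a concrete illustration of this mechanism, here is a minimal PyTorch-style sketch. The tensor layout, the sigmoid parameterization, and the collapse regularizer are our assumptions, not the paper's implementation (which would live inside a CUDA rasterizer); only the default β = 7.5 × 10⁻² is taken from the paper's ablation (Figure 7).

```python
import torch

N = 100_000                                       # number of Gaussian primitives
conf_logits = torch.zeros(N, requires_grad=True)  # one learnable confidence per primitive

def confidence_balanced_loss(photo_err, geom_err, splat_weights, beta=7.5e-2):
    """photo_err, geom_err: per-pixel loss maps, flattened to shape (P,).
    splat_weights: (N, P) contribution of each primitive to each pixel
    (dense here for clarity; a real rasterizer keeps this sparse)."""
    conf = torch.sigmoid(conf_logits)             # C_hat in (0, 1)
    conf_map = splat_weights.T @ conf             # alpha-blended per-pixel confidence
    # High confidence leans on photometric supervision, low confidence on
    # geometric supervision; the direction of this trade-off is our reading.
    data_term = conf_map * photo_err + (1.0 - conf_map) * geom_err
    # Regularizer keeps confidences from collapsing to the easy extreme.
    return data_term.mean() + beta * (1.0 - conf).mean()
```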

If this is right

  • Meshes extracted from 3DGS achieve higher accuracy in unbounded scenes containing view-dependent effects.
  • The overall pipeline retains the computational efficiency of standard 3D Gaussian Splatting.
  • Self-supervised signals alone suffice, removing the need for explicit multi-view consistency or external pre-trained models.
  • Decoupling the D-SSIM loss terms produces a stronger appearance model that aids surface reconstruction (see the sketch after this list).
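On the last point: SSIM is classically the product of luminance, contrast, and structure factors (Wang et al. [63]; see also Nilsson and Akenine-Möller [49]). One plausible way to "decouple" D-SSIM is to weight each factor's dissimilarity separately instead of taking the product. The sketch below does exactly that; the weights w_l, w_c, w_s are our hypothetical knobs, not the paper's scheme (which its Appendix A.1 describes and this page does not reproduce).

```python
import torch
import torch.nn.functional as F

def ssim_factors(x, y, win=11, C1=0.01**2, C2=0.03**2):
    """x, y: image batches of shape (B, C, H, W) with values in [0, 1]."""
    pad = win // 2
    mu_x, mu_y = F.avg_pool2d(x, win, 1, pad), F.avg_pool2d(y, win, 1, pad)
    var_x = (F.avg_pool2d(x * x, win, 1, pad) - mu_x**2).clamp(min=0)
    var_y = (F.avg_pool2d(y * y, win, 1, pad) - mu_y**2).clamp(min=0)
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    C3 = C2 / 2
    lum = (2 * mu_x * mu_y + C1) / (mu_x**2 + mu_y**2 + C1)              # luminance
    con = (2 * var_x.sqrt() * var_y.sqrt() + C2) / (var_x + var_y + C2)  # contrast
    struct = (cov + C3) / (var_x.sqrt() * var_y.sqrt() + C3)             # structure
    return lum, con, struct

def decoupled_dssim(x, y, w_l=1.0, w_c=1.0, w_s=1.0):
    # Classic D-SSIM is (1 - lum*con*struct) / 2; "decoupling" here weights
    # each dissimilarity term on its own instead of taking the product.
    lum, con, struct = ssim_factors(x, y)
    return (w_l * (1 - lum) + w_c * (1 - con) + w_s * (1 - struct)).mean() / 2
```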

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The confidence balancing idea could transfer to other explicit radiance-field representations that also need to separate geometry from appearance.
  • Real-time robotics or AR pipelines that already use 3DGS could directly output usable meshes without extra post-processing stages.
  • The variance penalties might generalize to other per-primitive attributes such as opacity or scale to further stabilize extraction (a sketch of one such penalty follows this list).
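A minimal sketch of a per-primitive variance penalty under an assumed data layout: gather the color (or normal) samples each Gaussian contributes and penalize their spread. Index handling inside real splatting kernels would differ, and as speculated above the attribute could equally be opacity or scale.

```python
import torch

def per_primitive_variance(values, prim_ids, num_prims):
    """values: (P, D) color or normal samples attributed to primitives,
    prim_ids: (P,) long tensor mapping each sample to its Gaussian."""
    d = values.shape[1]
    ones = torch.ones(len(prim_ids))
    counts = torch.zeros(num_prims).index_add_(0, prim_ids, ones).clamp(min=1)
    sums = torch.zeros(num_prims, d).index_add_(0, prim_ids, values)
    sq_sums = torch.zeros(num_prims, d).index_add_(0, prim_ids, values**2)
    mean = sums / counts.unsqueeze(-1)
    var = sq_sums / counts.unsqueeze(-1) - mean**2  # E[v^2] - E[v]^2 per primitive
    return var.clamp(min=0).mean()                  # penalize within-primitive spread
```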

Load-bearing premise

Learnable per-primitive confidence values can reliably resolve view-dependent ambiguities from only self-supervised photometric and geometric signals without multi-view checks or external models.

What would settle it

A decisive counter-result would be extracted meshes that still show large deviations from ground-truth surfaces in regions dominated by strong view-dependent effects, even after applying the confidence balancing and variance penalties.

Figures

Figures reproduced from arXiv: 2603.24725 by Andreas Kurz, Felix Windisch, Lukas Radl, Markus Steinberger, Michael Steiner, Thomas Köhler.

Figure 1. Teaser: We propose a novel, confidence-based method to extract meshes from 3D Gaussians. Each Gaussian is equipped with additional confidence values that balance photometric and geometric losses in a self-supervised manner. Compared to related work, our final meshes exhibit finer details and fewer artifacts.

Figure 2. Analysis of the Photometric Loss Components.

Figure 3. Confidence-Driven Gaussian Splatting: We show rendered images from our trained method, accompanied by the rendered confidence maps Ĉ. The confidence maps effectively isolate reflective surfaces, thin foliage, or rarely observed areas (such as the roof for Barn). Importantly, the low-confidence regions are still well-reconstructed.

Figure 4. Effect of our proposed Variance Losses: Rendering color and normals only for the first-hit Gaussians shows how our variance losses align individual Gaussians better to the true object surface, in both color and orientation.

Figure 5. Qualitative Mesh Comparison: Our approach produces more detailed meshes with fewer artifacts than other unbounded methods [18, 52]. Additionally, it achieves higher completeness and finer detail than bounded extraction works [5, 77].

Figure 6. Unbounded Mesh Rendering: Our method reconstructs both fine details and high-quality backgrounds on challenging Mip-NeRF 360 scenes [1].

Figure 7. β-Ablation Study: We evaluate surface reconstruction quality for varying choices of β. Our chosen value of 7.5 × 10⁻² achieves the best performance.

Figure 8. ScanNet++ dataset [69]: We show example images from our selected scenes, with the number of images per scene inset. All selected scenes contain challenging reflections (e.g. 5a269ba6fe, fb564c935d), textureless areas, and motion blur.

Figure 9. Completeness Comparison: Due to gaps in the ground-truth point clouds, our more complete meshes are penalized in precision compared to the bounded meshes from PGSR [5] (black points denote low precision). This highlights the fundamental disadvantage unbounded methods face for this benchmark.

Figure 10. Novel View Mesh Rendering Comparison with Bounded Methods.

Figure 11. Novel View Mesh Rendering Comparison with Unbounded Methods.

Figure 12. Confidence Evaluation: Our learned confidence directly correlates with the error maps, demonstrating that it can detect under-reconstructed regions.

Figure 13. Effect of our proposed Variance Losses: We visualize the first-hit, last-hit, and fully alpha-blended colors and normals. As can be seen, our variance losses constrain spurious geometry and lead to smoother normal maps.

Figure 14. Effect of our SSIM-decoupled Appearance Module.
Original abstract

Recently, 3D Gaussian Splatting (3DGS) greatly accelerated mesh extraction from posed images due to its explicit representation and fast software rasterization. While the addition of geometric losses and other priors has improved the accuracy of extracted surfaces, mesh extraction remains difficult in scenes with abundant view-dependent effects. To resolve the resulting ambiguities, prior works rely on multi-view techniques, iterative mesh extraction, or large pre-trained models, sacrificing the inherent efficiency of 3DGS. In this work, we present a simple and efficient alternative by introducing a self-supervised confidence framework to 3DGS: within this framework, learnable confidence values dynamically balance photometric and geometric supervision. Extending our confidence-driven formulation, we introduce losses which penalize per-primitive color and normal variance and demonstrate their benefits to surface extraction. Finally, we complement the above with an improved appearance model, by decoupling the individual terms of the D-SSIM loss. Our final approach delivers state-of-the-art results for unbounded meshes while remaining highly efficient.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a self-supervised confidence framework for mesh extraction from 3D Gaussian Splatting. Learnable per-primitive confidence scalars dynamically weight photometric and geometric losses; additional terms penalize per-primitive color and normal variance, and the D-SSIM loss is decoupled into separate components. The method claims to resolve view-dependent ambiguities in unbounded scenes, delivering state-of-the-art mesh quality while preserving the efficiency of 3DGS without multi-view consistency checks or external models.

Significance. If the quantitative results hold, the work would be significant for real-time 3D reconstruction pipelines. It offers a lightweight, self-supervised route to high-quality unbounded meshes that avoids the computational overhead of iterative refinement or large pre-trained networks, directly addressing a practical bottleneck in 3DGS-based surface extraction.

major comments (2)
  1. [§3] Confidence framework: The central claim that learnable per-primitive confidence scalars, trained only on photometric and geometric self-supervision, reliably resolve view-dependent ambiguities is load-bearing for both the accuracy and efficiency assertions. The formulation lacks an explicit multi-view consistency regularizer; if the learned weights fail to enforce cross-view coherence, the extracted surfaces will retain the same inconsistencies that prior multi-view methods were designed to avoid. An ablation that isolates the confidence term and reports view-consistency metrics (e.g., normal variance across held-out views) is required; a sketch of one such metric follows the minor comments below.
  2. [§4] Experiments: The SOTA claim for unbounded meshes rests on quantitative tables that are not referenced in the abstract. The manuscript must supply concrete comparisons (Chamfer distance, F-score, normal consistency) against the cited baselines on standard unbounded datasets, together with ablations that isolate the variance penalties and the decoupled D-SSIM terms. Without these numbers the efficiency advantage cannot be weighed against possible accuracy trade-offs.
minor comments (2)
  1. [§3.1] Notation for the per-primitive confidence scalar should be introduced once in §3.1 and used consistently thereafter to avoid confusion with other weighting parameters.
  2. Figure captions would benefit from explicit mention of the dataset and metric shown, especially for qualitative unbounded-scene comparisons.
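For the metric requested in major comment 1, here is one way to operationalize "normal variance across held-out views": render normal maps from several held-out views, associate them with a shared set of surface samples, and report angular spread per sample. The data layout and protocol below are our placeholders; the paper specifies no such procedure.

```python
import torch
import torch.nn.functional as F

def heldout_normal_variance(normals):
    """normals: (V, M, 3) unit normals of M surface samples rendered from
    V held-out views, with NaN rows where a sample is occluded."""
    valid = ~torch.isnan(normals[..., 0])                 # (V, M) visibility mask
    n = torch.nan_to_num(normals) * valid.unsqueeze(-1)   # zero out occluded rows
    counts = valid.sum(0).clamp(min=1)                    # views seen per sample
    mean_dir = F.normalize(n.sum(0) / counts.unsqueeze(-1), dim=-1)
    # Mean cosine to the mean direction: 1 = perfectly view-consistent normals.
    cos = (n * mean_dir).sum(-1)                          # (V, M)
    consistency = (cos * valid).sum(0) / counts
    return (1.0 - consistency).mean()                     # lower is better
```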

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address the two major comments below and will revise the manuscript accordingly to strengthen the presentation of our confidence framework and experimental validation.

Point-by-point responses
  1. Referee: [§3] The central claim that learnable per-primitive confidence scalars, trained only on photometric and geometric self-supervision, reliably resolve view-dependent ambiguities is load-bearing... An ablation that isolates the confidence term and reports view-consistency metrics (e.g., normal variance across held-out views) is required.

    Authors: We agree that explicit validation of cross-view coherence is valuable. In the revised manuscript we will add an ablation isolating the learnable confidence scalars and report view-consistency metrics including normal variance across held-out views. Our formulation uses the confidence-weighted photometric and geometric losses to implicitly encourage coherence; the added ablation will quantify this effect without introducing an explicit multi-view regularizer. revision: yes

  2. Referee: [§4] The SOTA claim for unbounded meshes rests on quantitative tables that are not referenced in the abstract. The manuscript must supply concrete comparisons (Chamfer distance, F-score, normal consistency) against the cited baselines on standard unbounded datasets, together with ablations that isolate the variance penalties and the decoupled D-SSIM terms.

    Authors: We accept this point. The revised version will reference the quantitative tables in the abstract and include the requested concrete comparisons (Chamfer distance, F-score, normal consistency) on standard unbounded datasets. We will also expand the experimental section with ablations that isolate the per-primitive variance penalties and the decoupled D-SSIM terms. revision: yes

Circularity Check

0 steps flagged

No circularity: learnable confidence parameters are independent of target mesh outputs

full rationale

The paper introduces learnable per-primitive confidence scalars that are optimized via self-supervised photometric and geometric losses (plus variance penalties and decoupled D-SSIM). No equations are shown that define these confidences in terms of the extracted surfaces they are claimed to improve, nor does any derivation reduce the SOTA unbounded-mesh claim to a fitted quantity by construction. The central mechanism is standard parameter learning from data signals; the derivation chain remains open to external validation through the reported empirical results rather than closing on its own inputs.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach rests on the assumption that self-supervised signals suffice to disambiguate view-dependent effects and that variance penalties improve surface quality; no explicit free parameters beyond the learnable confidences are named.

free parameters (1)
  • learnable confidence values
    Per-primitive scalars that dynamically weight photometric versus geometric losses.
axioms (1)
  • domain assumption: Self-supervised photometric and geometric signals can resolve view-dependent ambiguities.
    Invoked to justify the confidence framework without external multi-view or model-based priors.

pith-pipeline@v0.9.0 · 5484 in / 1130 out tokens · 38878 ms · 2026-05-14T23:55:12.804412+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

78 extracted references · 78 canonical work pages · 1 internal anchor

  1. [1] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In: CVPR (2022)
  2. [2] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields. In: ICCV (2023)
  3. [3] Bojanowski, P., Joulin, A., Lopez-Paz, D., Szlam, A.: Optimizing the Latent Space of Generative Networks. In: ICML (2017)
  4. [4] Chen, A., Xu, Z., Geiger, A., Yu, J., Su, H.: TensoRF: Tensorial Radiance Fields. In: ECCV (2022)
  5. [5] Chen, D., Li, H., Ye, W., Wang, Y., Xie, W., Zhai, S., Wang, N., Liu, H., Bao, H., Zhang, G.: PGSR: Planar-Based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction. IEEE TVCG 31(9) (2025)
  6. [6] Chen, H., Wei, F., Li, C., Huang, T., Wang, Y., Lee, G.H.: VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction. In: NeurIPS (2024)
  7. [7] Chen, H., Miller, B., Gkioulekas, I.: 3D Reconstruction with Fast Dipole Sums. ACM TOG 43(6) (2024)
  8. [8] Dahmani, H., Bennehar, M., Piasco, N., Roldao, L., Tsishkou, D.: SWAG: Splatting in the Wild images with Appearance-conditioned Gaussians. In: ECCV (2024)
  9. [9] Dai, P., Xu, J., Xie, W., Liu, X., Wang, H., Xu, W.: High-quality Surface Reconstruction using Gaussian Surfels. In: SIGGRAPH Asia (2024)
  10. [10] Deutsch, I., Moënne-Loccoz, N., State, G., Gojcic, Z.: PPISP: Physically-Plausible Compensation and Control of Photometric Variations in Radiance Field Reconstruction (2026), https://arxiv.org/abs/2601.18336
  11. [11] Di Sario, F., Rebain, D., Verbin, D., Grangetto, M., Tagliasacchi, A.: Spherical Voronoi: Directional Appearance as a Differentiable Partition of the Sphere (2025), https://arxiv.org/abs/2512.14180
  12. [12] Ewen, P., Chen, H., Isaacson, S., Wilson, J., Skinner, K.A., Vasudevan, R.: These Magic Moments: Differentiable Uncertainty Quantification of Radiance Field Models. arXiv preprint arXiv:2503.14665 (2025)
  13. [13] Fang, G., Wang, B.: Mini-Splatting: Representing Scenes with a Constrained Number of Gaussians. In: ECCV (2024)
  14. [14] Fischer, T., Bulò, S.R., Yang, Y.H., Keetha, N., Porzi, L., Müller, N., Schwarz, K., Luiten, J., Pollefeys, M., Kontschieder, P.: FlowR: Flowing from Sparse to Dense 3D Reconstructions. In: ICCV (2025)
  15. [15] Goldman, D.B.: Vignette and Exposure Calibration and Compensation. IEEE TPAMI 32(12), 2276–2288 (2010)
  16. [16] Goli, L., Reading, C., Sellán, S., Jacobson, A., Tagliasacchi, A.: Bayes' Rays: Uncertainty Quantification in Neural Radiance Fields. In: CVPR (2024)
  17. [17] Govindarajan, S., Rebain, D., Yi, K.M., Tagliasacchi, A.: Radiant Foam: Real-Time Differentiable Ray Tracing. In: ICCV (2025)
  18. [18] Guédon, A., Gomez, D., Maruani, N., Gong, B., Drettakis, G., Ovsjanikov, M.: MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction. ACM TOG 44(6) (2025)
  19. [19] Guédon, A., Lepetit, V.: SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering. In: CVPR (2024)
  20. [20] Hahlbohm, F., Friederichs, F., Weyrich, T., Franke, L., Kappel, M., Castillo, S., Stamminger, M., Eisemann, M., Magnor, M.: Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency. Comput. Graph. Forum 44(2) (2025)
  21. [21] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. In: SIGGRAPH (2024)
  22. [22] Jena, S., Ouasfi, A., Younes, M., Boukhayma, A.: Sparfels: Fast Reconstruction from Sparse Unposed Imagery. In: ICCV (2025)
  23. [23] Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., Aanæs, H.: Large Scale Multi-View Stereopsis Evaluation. In: CVPR (2014)
  24. [24] Jiang, K., Sivaram, V., Peng, C., Ramamoorthi, R.: Geometry Field Splatting with Gaussian Surfels. In: CVPR (2025)
  25. [25] Jiang, W., Lei, B., Daniilidis, K.: FisherRF: Active View Selection and Uncertainty Quantification for Radiance Fields using Fisher Information. In: ECCV (2024)
  26. [26] Jin, L., Zhong, X., Pan, Y., Behley, J., Stachniss, C., Popović, M.: ActiveGS: Active Scene Reconstruction Using Gaussian Splatting. IEEE Robotics and Automation Letters (2025)
  27. [27] Kendall, A., Gal, Y.: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In: NeurIPS (2017)
  28. [28] Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM TOG 42(4) (2023)
  29. [29] Kerbl, B., Meuleman, A., Kopanas, G., Wimmer, M., Lanvin, A., Drettakis, G.: A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets. ACM TOG 43(4) (2024)
  30. [30] Kheradmand, S., Rebain, D., Sharma, G., Sun, W., Tseng, Y.C., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: 3D Gaussian Splatting as Markov Chain Monte Carlo. In: NeurIPS (2024)
  31. [31] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction. ACM TOG 36(4) (2017)
  32. [32] Kulhanek, J., Peng, S., Kukelova, Z., Pollefeys, M., Sattler, T.: WildGaussians: 3D Gaussian Splatting in the Wild. In: NeurIPS (2024)
  33. [33] Li, J., Zhang, J., Zhang, Y., Bai, X., Zheng, J., Yu, X., Gu, L.: GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction. In: NeurIPS (2025)
  34. [34] Li, Q., Feng, H., Gong, X., Liu, Y.S.: VA-GS: Enhancing the Geometric Representation of Gaussian Splatting via View Alignment. In: NeurIPS (2025)
  35. [35] Li, R., Cheung, Y.M.: Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting. In: NeurIPS (2024)
  36. [36] Li, S., Liu, Y.S., Han, Z.: GaussianUDF: Inferring Unsigned Distance Functions through 3D Gaussian Splatting. In: CVPR (2025)
  37. [37] Lin, J., Li, Z., Tang, X., Liu, J., Liu, S., Liu, J., Lu, Y., Wu, X., Xu, S., Yan, Y., Yang, W.: VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction. In: CVPR (2024)
  38. [38] Liu, R., Sun, D., Chen, M., Wang, Y., Feng, A.: Deformable Beta Splatting. In: SIGGRAPH (2025)
  39. [39] Liu, S., Wu, J., Wu, W., Chu, L., Liu, X.: Uncertainty-Aware Gaussian Splatting with View-Dependent Regularization for High-Fidelity 3D Reconstruction. In: Eurographics Symposium on Rendering (2025)
  40. [40] von Lützow, N., Nießner, M.: LinPrim: Linear Primitives for Differentiable Volumetric Rendering. In: NeurIPS (2025)
  41. [41] Lyu, X., Sun, Y.T., Huang, Y.H., Wu, X., Yang, Z., Chen, Y., Pang, J., Qi, X.: 3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting. ACM TOG 43(6) (2024)
  42. [42] Mallick, S., Goel, R., Kerbl, B., Vicente Carrasco, F., Steinberger, M., De La Torre, F.: Taming 3DGS: High-Quality Radiance Fields with Limited Resources. In: SIGGRAPH Asia (2024)
  43. [43] Martin-Brualla, R., Radwan, N., Sajjadi, M.S., Barron, J.T., Dosovitskiy, A., Duckworth, D.: NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In: CVPR (2021)
  44. [44] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In: ECCV (2020)
  45. [45] Miller, B., Chen, H., Lai, A., Gkioulekas, I.: Objects as Volumes: A Stochastic Geometry View of Opaque Solids. In: CVPR (2024)
  46. [46] Moenne-Loccoz, N., Mirzaei, A., Perel, O., de Lutio, R., Esturo, J.M., State, G., Fidler, S., Sharp, N., Gojcic, Z.: 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes. ACM TOG 43(6) (2024)
  47. [47] Müller, T., Evans, A., Schied, C., Keller, A.: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM TOG 41(4) (2022)
  48. [48] Niemeyer, M., Manhardt, F., Rakotosaona, M.J., Oechsle, M., Tsalicoglou, C., Tateno, K., Barron, J.T., Tombari, F.: Learning Neural Exposure Fields for View Synthesis. In: NeurIPS (2025)
  49. [49] Nilsson, J., Akenine-Möller, T.: Understanding SSIM (2020), https://arxiv.org/abs/2006.13846
  50. [50] Papantonakis, P., Kopanas, G., Kerbl, B., Lanvin, A., Drettakis, G.: Reducing the Memory Footprint of 3D Gaussian Splatting. Proc. ACM Comput. Graph. Interact. Tech. 7(1) (2024)
  51. [51] Radl, L., Steiner, M., Parger, M., Weinrauch, A., Kerbl, B., Steinberger, M.: StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering. ACM TOG 43(4) (2024)
  52. [52] Radl, L., Windisch, F., Deixelberger, T., Hladky, J., Steiner, M., Schmalstieg, D., Steinberger, M.: SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction. In: SIGGRAPH Asia (2025)
  53. [53] Rota Bulò, S., Porzi, L., Kontschieder, P.: Revising Densification in Gaussian Splatting. In: ECCV (2024)
  54. [54] Rückert, D., Franke, L., Stamminger, M.: ADOP: Approximate Differentiable One-Pixel Point Rendering. ACM TOG 41(4) (2022)
  55. [55] Shen, J., Agudo, A., Moreno-Noguer, F., Ruiz, A.: Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification. In: ECCV (2022)
  56. [56] Shen, J., Ruiz, A., Agudo, A., Moreno-Noguer, F.: Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations. In: 3DV (2021)
  57. [57] Steiner, M., Köhler, T., Radl, L., Windisch, F., Schmalstieg, D., Steinberger, M.: AAA-Gaussians: Anti-Aliased and Artifact-Free 3D Gaussian Rendering. In: ICCV (2025)
  58. [58] Sun, C., Choe, J., Loop, C., Ma, W.C., Wang, Y.C.F.: Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering. In: CVPR (2025)
  59. [59] Sünderhauf, N., Abou-Chakra, J., Miller, D.: Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields. In: ICRA (2022)
  60. [60] Tu, X., Radl, L., Steiner, M., Steinberger, M., Kerbl, B., de la Torre, F.: VRSplat: Fast and Robust Gaussian Splatting for Virtual Reality. Proc. ACM Comput. Graph. Interact. Tech. 8(1) (2025)
  61. [61] Wang, S., Leroy, V., Cabon, Y., Chidlovskii, B., Revaud, J.: DUSt3R: Geometric 3D Vision Made Easy. In: CVPR (2024)
  62. [62] Wang, Y., Wang, C., Gong, B., Xue, T.: Bilateral Guided Radiance Field Processing. ACM TOG 43(4), 1–13 (2024)
  63. [63] Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE TIP 13(4), 600–612 (2004)
  64. [64] Wu, J.Z., Zhang, Y., Turki, H., Ren, X., Gao, J., Shou, M.Z., Fidler, S., Gojcic, Z., Ling, H.: Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models. In: CVPR (2025)
  65. [65] Wu, Q., Martinez Esturo, J., Mirzaei, A., Moenne-Loccoz, N., Gojcic, Z.: 3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting. In: CVPR (2025)
  66. [66] Xue, S., Dill, J., Mathur, P., Dellaert, F., Tsiotras, P., Xu, D.: Neural Visibility Field for Uncertainty-Driven Active Mapping. In: CVPR (2024)
  67. [67] Yang, Z., Gao, X., Sun, Y., Huang, Y., Lyu, X., Zhou, W., Jiao, S., Qi, X., Jin, X.: Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting. In: NeurIPS (2024)
  68. [68] Ye, Z., Li, W., Liu, S., Qiao, P., Dou, Y.: AbsGS: Recovering Fine Details in 3D Gaussian Splatting. In: ACM MM (2024)
  69. [69] Yeshwanth, C., Liu, Y.C., Nießner, M., Dai, A.: ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes. In: ICCV (2023)
  70. [70] Yu, M., Lu, T., Xu, L., Jiang, L., Xiangli, Y., Dai, B.: GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction. In: NeurIPS (2024)
  71. [71] Yu, Z., Chen, A., Huang, B., Sattler, T., Geiger, A.: Mip-Splatting: Alias-free 3D Gaussian Splatting. In: CVPR (2024)
  72. [72] Yu, Z., Sattler, T., Geiger, A.: Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes. ACM TOG 43(6) (2024)
  73. [73] Zhang, B., Fang, C., Shrestha, R., Liang, Y., Long, X.X., Tan, P.: RaDe-GS: Rasterizing Depth in Gaussian Splatting (2024), https://arxiv.org/abs/2406.01467
  74. [74] Zhang, D., Wang, C., Wang, W., Li, P., Qin, M., Wang, H.: Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections. In: ECCV (2024)
  75. [75] Zhang, W., Liu, Y.S., Han, Z.: Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set. In: NeurIPS (2024)
  76. [76] Zhang, Z., Roussel, N., Müller, T., Zeltner, T., Nimier-David, M., Rousselle, F., Jakob, W.: Radiance Surfaces: Optimizing Surface Representations with a 5D Radiance Field Loss. In: SIGGRAPH (2025)
  77. [77] Zhang, Z., Huang, B., Jiang, H., Zhou, L., Xiang, X., Shen, S.: Quadratic Gaussian Splatting: High Quality Surface Reconstruction with Second-order Geometric Primitives. In: ICCV (2025)
  78. [78] Zhao, C., Wang, X., Zhang, T., Javed, S., Salzmann, M.: Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis. In: ICCV (2025)