pith. machine review for the scientific record.

arxiv: 2604.27437 · v1 · submitted 2026-04-30 · 💻 cs.CV

Recognition: unknown

Softmax-GS: Generalized Gaussians Learning When to Blend or Bound

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 09:59 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D Gaussian Splatting · Novel View Synthesis · Softmax Competition · Overlap Handling · View Consistency · Sharp Boundaries · 3D Reconstruction

The pith

Softmax-GS lets 3D Gaussians learn when to blend colors or form sharp boundaries in overlaps while preserving rendering consistency.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Standard 3D Gaussian Splatting assumes Gaussians never overlap in space; when they do, rendering produces artifacts and view inconsistencies, and the Gaussians' diffuse spread makes sharp object edges hard to reconstruct. Softmax-GS replaces that assumption with a softmax-based competition applied directly in the overlap region between any pair of Gaussians. Learnable parameters tune the strength of the competition, so the same model can produce smooth color mixing at one extreme and crisp, decisive boundaries at the other. The formulation is built to keep the result identical regardless of the order in which the two Gaussians are considered and to hold transmittance constant no matter how much they overlap. If these properties hold, novel-view synthesis gains both higher visual quality and greater parameter efficiency on real scenes.

Core claim

Softmax-GS enforces a softmax-based competition in overlapping regions between two Gaussians. Learnable parameters control the strength of this competition, enabling a continuous spectrum from smooth color blending to crisp, well-defined boundaries. The formulation preserves order invariance for any two overlapping Gaussians and keeps the output transmittance unchanged irrespective of the extent of overlapping, preventing undesirable discontinuities in the rendered output.
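The abstract gives no equations, but the claim can be made concrete with a minimal sketch. Here `beta` (a learnable sharpness, or inverse temperature) and the symmetric opacity merge are assumptions of this sketch, not the paper's notation:

```python
import numpy as np

def softmax_compete(g1, g2, a1, a2, c1, c2, beta):
    """Hypothetical pairwise softmax competition in an overlap region.

    g1, g2 : Gaussian responses at the shaded point.
    beta   : learnable sharpness. beta -> 0 approaches equal blending;
             large beta approaches winner-take-all (a crisp boundary).
    """
    logits = beta * np.array([g1, g2])
    w = np.exp(logits - logits.max())      # numerically stable softmax
    w /= w.sum()
    color = w[0] * np.asarray(c1) + w[1] * np.asarray(c2)
    # Symmetric opacity merge: the pair's total occlusion is fixed, so
    # transmittance is unchanged by ordering or by the overlap extent.
    alpha = 1.0 - (1.0 - a1) * (1.0 - a2)
    return color, alpha
```

Swapping the two Gaussians permutes the softmax weights together with the colors, so the output is order-invariant by construction, and sweeping `beta` traces the claimed spectrum from blending to bounding.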

What carries the argument

Softmax-based competition between pairs of overlapping Gaussians, modulated by learnable parameters that set competition strength.

If this is right

  • The method produces a continuous spectrum from smooth blending to crisp boundaries under a single formulation.
  • Order invariance and constant transmittance are guaranteed for any pair of overlapping Gaussians.
  • View inconsistencies and overlap artifacts are removed while sharp object edges become recoverable.
  • Real-world benchmarks show improved reconstruction quality together with higher parameter efficiency.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same competition principle could be tested on other splatting primitives such as points or surfels.
  • Dynamic scenes or very large environments offer a direct test of whether the learned parameters remain stable across frames.
  • The tunable spectrum may let downstream applications choose blending strength per region without changing the underlying representation.

Load-bearing premise

That the learnable competition parameters can be optimized to control blending versus bounding without introducing new artifacts, view inconsistencies, or excessive training cost, and that the mechanism generalizes beyond the tested simple geometries and real-world benchmarks.

What would settle it

A controlled test scene with two known overlapping Gaussians would settle it: if reordering the Gaussians or changing their overlap extent alters the final rendered color or transmittance, the claimed invariance properties are falsified.
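That test can be run mechanically. The harness below sweeps random pairs and orderings; the two rules at the bottom are hypothetical stand-ins for the paper's actual formulation, included only to show what a pass and a falsification look like:

```python
import numpy as np

def check_order_invariance(compete, trials=200, tol=1e-6, seed=0):
    """Return False on the first pair where swapping the two Gaussians
    changes the competed color or merged opacity; True otherwise.
    compete(gA, gB, aA, aB, cA, cB) -> (color, alpha) is any candidate
    pairwise rule (the paper's, or a stand-in)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        gA, gB = rng.uniform(0.1, 2.0, size=2)
        aA, aB = rng.uniform(0.05, 0.95, size=2)
        cA, cB = rng.uniform(0, 1, size=3), rng.uniform(0, 1, size=3)
        col1, al1 = compete(gA, gB, aA, aB, cA, cB)
        col2, al2 = compete(gB, gA, aB, aA, cB, cA)
        if not (np.allclose(col1, col2, atol=tol) and abs(al1 - al2) < tol):
            return False  # invariance falsified
    return True

def symmetric_rule(gA, gB, aA, aB, cA, cB):
    # Softmax weights plus a symmetric opacity merge: passes the check.
    w = np.exp([gA, gB]); w = w / w.sum()
    return w[0] * cA + w[1] * cB, 1.0 - (1.0 - aA) * (1.0 - aB)

def biased_rule(gA, gB, aA, aB, cA, cB):
    # Always favors the first-listed Gaussian's color: fails the check.
    return cA, 1.0 - (1.0 - aA) * (1.0 - aB)
```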

Figures

Figures reproduced from arXiv: 2604.27437 by Chen Ziwen, Hao Tan, Li Fuxin, Peng Wang, Zexiang Xu.

Figure 1: Comparison between different versions of 3D GS and …
Figure 2: Softmax-GS provides flexible boundary sharpness control and introduces a viewpoint-consistent, softmax-based color-merging …
Figure 3: Visualization of softmax competition between two identical Gaussians …
Figure 5: Simple geometry fitting with 4 Gaussians using 3D …
Figure 6: Fitting experiment with 3D GS and Softmax-GS on …
Figure 7: Qualitative comparison with 3D GS, GES and StopThePop on real-world datasets.
Figure 8: Pixel-wise depth rendering of synthetic pattern (color and depth, 3D GS vs. Softmax-GS).
Figure 9: Depth rendering comparison with 3D GS (top view; GEF-only, Softmax-GS, StopThePop).
Original abstract

3D Gaussian Splatting (3D GS) is widely adopted for novel view synthesis due to its high training and rendering efficiency. However, its efficiency relies on the key assumption that Gaussians do not overlap in the 3D space, which leads to noticeable artifacts and view inconsistencies. In addition, the inherently diffuse boundaries of Gaussians hinder accurate reconstruction of sharp object edges. We propose Softmax-GS, a unified solution that addresses both the view-inconsistency and the diffuse-boundary problem by enforcing a softmax-based competition in overlapping regions between two Gaussians. With learnable parameters controlling the strength of the competition, it enables a continuous spectrum from smooth color blending to crisp, well-defined boundaries. Our formulation explicitly preserves order invariance for any two overlapping Gaussians and ensures that the output transmittance remains unchanged irrespective of the extent of overlapping, preventing undesirable discontinuities in the rendered output. Ablation experiments on simple geometries demonstrate the effectiveness of each component of Softmax-GS, and evaluations on real-world benchmarks show that it achieves state-of-the-art performance, improving both reconstruction quality and parameter efficiency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes Softmax-GS as an extension to 3D Gaussian Splatting. It introduces a softmax-based competition rule between pairs of overlapping Gaussians, modulated by learnable parameters that control competition strength. This is claimed to enable a continuous transition from smooth color blending to crisp boundaries, while explicitly preserving order invariance and leaving output transmittance unchanged regardless of overlap extent. The approach is positioned as addressing view inconsistencies and diffuse edges in standard 3DGS, with ablations on simple geometries and SOTA results on real-world benchmarks.

Significance. If the invariance properties and generalization hold under multi-Gaussian overlaps, the method supplies a flexible, learnable mechanism for managing overlaps that could improve both quality and efficiency in novel-view synthesis pipelines. The explicit parameterization of competition strength is a constructive contribution, as it avoids hard-coded thresholds while aiming to maintain rendering consistency.

major comments (2)
  1. [Abstract and core formulation] Abstract and the core formulation (likely §3): the central claim states that the formulation 'explicitly preserves order invariance for any two overlapping Gaussians and ensures that the output transmittance remains unchanged irrespective of the extent of overlapping.' This is formulated and presumably proven only for pairs, yet 3DGS rendering depth-sorts and sequentially blends an arbitrary number of Gaussians per pixel. No reduction rule or composition argument is supplied for n>2 overlaps, raising the risk that composite alpha and transmittance become grouping- or order-dependent, undermining the no-discontinuity guarantee.
  2. [Ablation and evaluation sections] Ablation and evaluation sections: the abstract asserts 'state-of-the-art performance' and 'effectiveness of each component' but supplies no quantitative metrics, tables, or error bars. Even if the full manuscript contains benchmark numbers, the absence of reported PSNR/SSIM deltas, parameter counts, and statistical significance for the learnable-parameter ablations weakens the load-bearing claim that the softmax mechanism improves both quality and efficiency.
minor comments (2)
  1. [Method] Notation for the learnable competition parameters should be introduced with explicit ranges and initialization details to allow reproduction.
  2. [Implementation details] The manuscript would benefit from a short pseudocode block showing how the pairwise softmax is inserted into the existing 3DGS alpha-blending loop.
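For illustration only, a hypothetical sketch of what such pseudocode might look like: a standard front-to-back 3DGS compositing loop with a pairwise `compete` rule and an `overlaps` predicate spliced in, neither of which is specified in the abstract.

```python
import numpy as np

def composite_pixel(gaussians, compete, overlaps):
    """Front-to-back alpha blending with a hypothetical pairwise
    softmax competition inserted into the standard 3DGS loop.

    gaussians : depth-sorted list of (response g, opacity a, color c).
    compete   : pairwise rule (g1, g2, a1, a2, c1, c2) -> (color, alpha).
    overlaps  : predicate overlaps(i, j) -> bool for adjacent Gaussians.
    """
    out = np.zeros(3)
    T = 1.0  # accumulated transmittance
    i = 0
    while i < len(gaussians) and T > 1e-4:
        g, a, c = gaussians[i]
        if i + 1 < len(gaussians) and overlaps(i, i + 1):
            # Resolve the overlapping pair by competition instead of
            # compositing the two Gaussians independently.
            g2, a2, c2 = gaussians[i + 1]
            c, a = compete(g, g2, a, a2, c, c2)
            i += 1  # the pair is consumed as one merged contribution
        out += T * a * np.asarray(c)
        T *= 1.0 - a
        i += 1
    return out, T
```

With no overlaps this reduces exactly to standard 3DGS alpha blending, which is the behavior any such insertion would need to preserve.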

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and the opportunity to clarify the formulation and evaluation of Softmax-GS. We address each major point below and indicate the revisions we will incorporate.

Point-by-point responses
  1. Referee: [Abstract and core formulation] Abstract and the core formulation (likely §3): the central claim states that the formulation 'explicitly preserves order invariance for any two overlapping Gaussians and ensures that the output transmittance remains unchanged irrespective of the extent of overlapping.' This is formulated and presumably proven only for pairs, yet 3DGS rendering depth-sorts and sequentially blends an arbitrary number of Gaussians per pixel. No reduction rule or composition argument is supplied for n>2 overlaps, raising the risk that composite alpha and transmittance become grouping- or order-dependent, undermining the no-discontinuity guarantee.

    Authors: We appreciate the referee's careful reading of the invariance claim. The derivation in §3 establishes order invariance and transmittance conservation explicitly for any pair of overlapping Gaussians via the softmax competition, which redistributes alpha contributions while preserving their sum. Because 3DGS rendering applies alpha blending sequentially after fixed depth sorting, and our pairwise operation does not alter the total transmittance or introduce new ordering dependencies, the property composes across multiple Gaussians. We will add an explicit composition argument (or inductive step) for n>2 overlaps to §3 in the revised manuscript to make this rigorous and eliminate any ambiguity. revision: yes
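The transmittance half of this composition argument is easy to verify numerically, assuming the pairwise opacity merge is the symmetric rule sketched below (equivalent to multiplying the two transmittances):

```python
import itertools
from functools import reduce

def merge(a1, a2):
    # Hypothetical symmetric pairwise opacity merge: equivalent to
    # multiplying the two transmittances (1 - a).
    return 1.0 - (1.0 - a1) * (1.0 - a2)

alphas = [0.2, 0.5, 0.35, 0.8]
# Fold the pairwise merge over every ordering of four Gaussians; the
# set of distinct results collapses to one value because the merge is
# associative and commutative.
totals = {round(reduce(merge, p), 12) for p in itertools.permutations(alphas)}
assert len(totals) == 1
```

This covers transmittance only; whether the competed colors are also grouping-independent for n > 2 is exactly what the promised inductive step must establish.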

  2. Referee: [Ablation and evaluation sections] Ablation and evaluation sections: the abstract asserts 'state-of-the-art performance' and 'effectiveness of each component' but supplies no quantitative metrics, tables, or error bars. Even if the full manuscript contains benchmark numbers, the absence of reported PSNR/SSIM deltas, parameter counts, and statistical significance for the learnable-parameter ablations weakens the load-bearing claim that the softmax mechanism improves both quality and efficiency.

    Authors: The full manuscript contains detailed quantitative results, including PSNR/SSIM/LPIPS tables, parameter-efficiency comparisons, and ablation studies with error bars on both synthetic geometries and real-world benchmarks (Mip-NeRF 360, Tanks & Temples). These support the SOTA claims and component effectiveness. To address the referee's concern about the abstract, we will insert concise quantitative highlights (e.g., average PSNR gain and parameter reduction) into the abstract in the revision. revision: yes

Circularity Check

0 steps flagged

No circularity: new softmax competition rule with independent learnable parameters

Full rationale

The paper introduces an explicit new competition mechanism via pairwise softmax with learnable strength parameters, presented as a direct formulation that enforces order invariance and transmittance preservation for overlapping Gaussians. No derivation step reduces a claimed result to a fitted input by construction, no self-citation chain bears the central load, and the invariance statements are derived from the proposed equations rather than renamed prior results or ansatzes imported from the authors' own work. The approach is self-contained against external benchmarks with new components.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on a new softmax competition rule whose invariance properties are taken from standard math and on learnable parameters that are fitted during training.

free parameters (1)
  • learnable parameters controlling competition strength
    These parameters are optimized to set the degree of blending versus sharp boundaries and are central to the continuous spectrum claim.
axioms (1)
  • standard math: the softmax operation applied to overlapping Gaussians preserves order invariance and leaves transmittance unchanged regardless of overlap extent
    Invoked to guarantee no discontinuities in the rendered output.
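For reference, the permutation symmetry being invoked can be written out; the notation below is assumed for illustration, not taken from the paper. With per-Gaussian responses $s_1, s_2$, colors $c_1, c_2$, opacities $\alpha_1, \alpha_2$, and a learnable sharpness $\beta$:

```latex
w_i = \frac{e^{\beta s_i}}{e^{\beta s_1} + e^{\beta s_2}}, \quad i \in \{1,2\},
\qquad c_{\mathrm{out}} = w_1 c_1 + w_2 c_2,
\qquad \alpha_{\mathrm{out}} = 1 - (1-\alpha_1)(1-\alpha_2).
```

Swapping the pair permutes the weights together with the colors, so $c_{\mathrm{out}}$ is unchanged, and the transmittance $T = (1-\alpha_1)(1-\alpha_2)$ depends only on the opacities, not on ordering or overlap extent. The limits $\beta \to 0$ (equal blending) and $\beta \to \infty$ (winner-take-all) span the claimed blend-to-bound spectrum.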

pith-pipeline@v0.9.0 · 5499 in / 1267 out tokens · 64248 ms · 2026-05-07T09:59:22.572721+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

39 extracted references · 5 canonical work pages

  1. [1]

    Mip-nerf 360: Unbounded anti-aliased neural radiance fields

    Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5470–5479, 2022. 2, 7

  2. [2]

    A naturalistic open source movie for optical flow evaluation

    Daniel J Butler, Jonas Wulff, Garrett B Stanley, and Michael J Black. A naturalistic open source movie for optical flow evaluation. In European conference on computer vision, pages 611–625. Springer, 2012. 1

  3. [3]

    Lightweight predictive 3d gaussian splats. arXiv preprint arXiv:2406.19434, 2024

    Junli Cao, Vidit Goel, Chaoyang Wang, Anil Kag, Ju Hu, Sergei Korolev, Chenfanfu Jiang, Sergey Tulyakov, and Jian Ren. Lightweight predictive 3d gaussian splats. arXiv preprint arXiv:2406.19434, 2024. 1

  4. [4]

    Lightgaussian: Unbounded 3d gaussian compression with 15x reduction and 200+ fps. Advances in neural information processing systems, 37:140138–140158,

    Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zhangyang Wang, et al. Lightgaussian: Unbounded 3d gaussian compression with 15x reduction and 200+ fps. Advances in neural information processing systems, 37:140138–140158,

  5. [5]

    Efficient perspective-correct 3d gaussian splatting using hybrid transparency

    Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, and Marcus Magnor. Efficient perspective-correct 3d gaussian splatting using hybrid transparency. In Computer Graphics Forum, page e70014. Wiley Online Library, 2025. 2

  6. [6]

    Ges: Generalized exponential splatting for efficient radiance field rendering

    Abdullah Hamdi, Luke Melas-Kyriazi, Jinjie Mai, Guocheng Qian, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, and Andrea Vedaldi. Ges: Generalized exponential splatting for efficient radiance field rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19812–19822, 2024. 1, 2, 5, 7, 8

  7. [7]

    Speedy-splat: Fast 3d gaussian splatting with sparse pixels and sparse primitives

    Alex Hanson, Allen Tu, Geng Lin, Vasu Singla, Matthias Zwicker, and Tom Goldstein. Speedy-splat: Fast 3d gaussian splatting with sparse pixels and sparse primitives. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21537–21546, 2025. 1

  8. [8]

    Pup 3d-gs: Principled uncertainty pruning for 3d gaussian splatting

    Alex Hanson, Allen Tu, Vasu Singla, Mayuka Jayawardhana, Matthias Zwicker, and Tom Goldstein. Pup 3d-gs: Principled uncertainty pruning for 3d gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5949–5958, 2025. 1

  9. [9]

    Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (ToG), 37(6):1–15, 2018

    Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (ToG), 37(6):1–15, 2018. 2, 7

  10. [10]

    3d convex splatting: Radiance field rendering with 3d smooth convexes

    Jan Held, Renaud Vandeghen, Abdullah Hamdi, Adrien Deliege, Anthony Cioppa, Silvio Giancola, Andrea Vedaldi, Bernard Ghanem, and Marc Van Droogenbroeck. 3d convex splatting: Radiance field rendering with 3d smooth convexes. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21360–21369, 2025. 1, 2, 7, 8

  11. [11]

    Sort-free gaussian splatting via weighted sum rendering

    Qiqi Hou, Randall Rauwendaal, Zifeng Li, Hoang Le, Farzad Farhadzadeh, Fatih Porikli, Alexei Bourd, and Amir Said. Sort-free gaussian splatting via weighted sum rendering. arXiv preprint arXiv:2410.18931, 2024. 2, 3, 7

  12. [12]

    Deformable radial kernel splatting

    Yi-Hua Huang, Ming-Xian Lin, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. Deformable radial kernel splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21513–21523,

  13. [13]

    3d gaussian splatting for real-time radiance field rendering. ACM Trans

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139–1, 2023. 1, 7

  14. [14]

    3d gaussian splatting as markov chain monte carlo. Advances in Neural Information Processing Systems, 37:80965–80986, 2024

    Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Yang-Che Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, and Kwang Moo Yi. 3d gaussian splatting as markov chain monte carlo. Advances in Neural Information Processing Systems, 37:80965–80986, 2024. 1, 7

  15. [15]

    Stochasticsplats: Stochastic rasterization for sorting-free 3d gaussian splatting. arXiv preprint arXiv:2503.24366, 2025

    Shakiba Kheradmand, Delio Vicini, George Kopanas, Dmitry Lagun, Kwang Moo Yi, Mark Matthews, and Andrea Tagliasacchi. Stochasticsplats: Stochastic rasterization for sorting-free 3d gaussian splatting. arXiv preprint arXiv:2503.24366, 2025. 2, 3

  16. [16]

    Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (ToG), 36(4):1–13, 2017

    Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (ToG), 36(4):1–13, 2017. 2, 7

  17. [17]

    Compact 3d gaussian representation for radiance field

    Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3d gaussian representation for radiance field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21719–21728, 2024. 1

  18. [18]

    3d-hgs: 3d half-gaussian splatting

    Haolin Li, Jinyang Liu, Mario Sznaier, and Octavia Camps. 3d-hgs: 3d half-gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 10996–11005, 2025. 1, 2, 7

  19. [19]

    Vastgaussian: Vast 3d gaussians for large scene reconstruction

    Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, et al. Vastgaussian: Vast 3d gaussians for large scene reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5166–5175, 2024. 1

  20. [20]

    Citygaussian: Real-time high-quality large-scale scene rendering with gaussians

    Yang Liu, Chuanchen Luo, Lue Fan, Naiyan Wang, Junran Peng, and Zhaoxiang Zhang. Citygaussian: Real-time high-quality large-scale scene rendering with gaussians. In European Conference on Computer Vision, pages 265–282. Springer, 2025. 1

  21. [21]

    Scaffold-gs: Structured 3d gaussians for view-adaptive rendering

    Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654–20664, 2024. 1

  22. [22]

    Ever: Exact volumetric ellipsoid rendering for real-time view synthesis

    Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T Barron, and Yinda Zhang. Ever: Exact volumetric ellipsoid rendering for real-time view synthesis. arXiv preprint arXiv:2410.01804, 2024. 2, 3, 7

  23. [23]

    Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021

    Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021. 1

  24. [24]

    Disc-gs: Discontinuity-aware gaussian splatting. Advances in Neural Information Processing Systems, 37:112284–112309, 2024

    Haoxuan Qu, Zhuoling Li, Hossein Rahmani, Yujun Cai, and Jun Liu. Disc-gs: Discontinuity-aware gaussian splatting. Advances in Neural Information Processing Systems, 37:112284–112309, 2024. 1, 2

  25. [25]

    Stopthepop: Sorted gaussian splatting for view-consistent real-time rendering. ACM Transactions on Graphics (TOG), 43(4):1–17, 2024

    Lukas Radl, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, and Markus Steinberger. Stopthepop: Sorted gaussian splatting for view-consistent real-time rendering. ACM Transactions on Graphics (TOG), 43(4):1–17, 2024. 1, 2, 7

  26. [26]

    Octree-gs: Towards consistent real-time rendering with lod-structured 3d gaussians. arXiv preprint arXiv:2403.17898, 2024

    Kerui Ren, Lihan Jiang, Tao Lu, Mulin Yu, Linning Xu, Zhangkai Ni, and Bo Dai. Octree-gs: Towards consistent real-time rendering with lod-structured 3d gaussians. arXiv preprint arXiv:2403.17898, 2024. 1

  27. [27]

    Revising densification in gaussian splatting

    Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. Revising densification in gaussian splatting. In European Conference on Computer Vision, pages 347–362. Springer,

  28. [28]

    Raft: Recurrent all-pairs field transforms for optical flow

    Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In European conference on computer vision, pages 402–419. Springer, 2020. 1

  29. [29]

    Sags: Structure-aware 3d gaussian splatting

    Evangelos Ververas, Rolandos Alexandros Potamias, Jifei Song, Jiankang Deng, and Stefanos Zafeiriou. Sags: Structure-aware 3d gaussian splatting. In European Conference on Computer Vision, pages 221–238. Springer, 2024. 1

  30. [30]

    Absgs: Recovering fine details in 3d gaussian splatting

    Zongxin Ye, Wenyu Li, Sidun Liu, Peng Qiao, and Yong Dou. Absgs: Recovering fine details in 3d gaussian splatting. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 1053–1061, 2024. 1

  31. [31]

    3d student splatting and scooping

    Jialin Zhu, Jiangbei Yue, Feixiang He, and He Wang. 3d student splatting and scooping. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21045–21054, 2025. 1, 2, 7

  32. [32]

    Ewa volume splatting

    Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. Ewa volume splatting. In Proceedings Visualization, 2001. VIS'01., pages 29–538. IEEE, 2001. 3

  33. [33]

    Optimization is run for 10K steps without opacity reset against the target image using default rendering losses

    More implementation details. For simple-geometry fitting experiments, we place a camera at the origin facing the +z direction, and initialize four black Gaussians with identical shapes slightly apart at a depth of 1 unit in front of the camera. Optimization is run for 10K steps without opacity reset against the target image using default rendering losses. ...

  34. [34]

    We first render a video from the reconstructed Gaussians and, for each frame Ii, pair it with the frame seven steps ahead, Ii+7

    Measurement of view consistency. We evaluate the view consistency of Softmax-GS following the protocol from StopThePop [25]. We first render a video from the reconstructed Gaussians and, for each frame Ii, pair it with the frame seven steps ahead, Ii+7. We then apply the RAFT optical-flow method [28] to warp Ii to Ii+7, producing Îi, and compute MSE and ...

  35. [35]

    8 for a synthetic pattern and in Fig

    Depth rendering of Softmax-GS We visualize the depth rendering of Softmax-GS in Fig. 8 for a synthetic pattern and in Fig. 9 for a real-world scene. Note the smooth depth transitions at Gaussian boundaries

  36. [36]

    Non-coplanar intersection. We visualize two Gaussians crossing at 30° in Fig. 10. In contrast to the popping artifacts of 3D GS and the abrupt color changes of STP, Softmax-GS produces smooth, flicker-free transitions at the crossing. Note Softmax-GS effectively merges intersecting Gaussians into a single surface, consistent with the physical assumptio...

  37. [37]

    Per-scene results Per-scene comparisons of PSNR and Gaussian counts are presented in Table 5, showing that Softmax-GS achieves higher rendering quality with a similar number of Gaussians across all scenes

  38. [38]

    Full Algorithm We provide the complete forward-pass of the Softmax-GS algorithm in Algorithm 1

  39. [39]

    First, the proposed splatting algorithm is applied only to the first 128 Gaussians along each ray in order to maintain linear complexity in the backward pass

    Limitations. Softmax-GS has three main limitations. First, the proposed splatting algorithm is applied only to the first 128 Gaussians along each ray in order to maintain linear complexity in the backward pass. As a result, coverage is incomplete: on Mip-NeRF360 indoor scenes, Softmax-GS accounts for approximately 85% of pixels across test images, while ...