pith · machine review for the scientific record

arXiv: 2604.01204 · v2 · submitted 2026-04-01 · 💻 cs.CV · cs.AI · cs.GR · cs.LG

Recognition: 2 theorem links · Lean Theorem

Neural Harmonic Textures for High-Quality Primitive Based Neural Reconstruction

Jorge Condor, Merlin Nimier-David, Nicolas Moenne-Loccoz, Piotr Didyk, Qi Wu, Zan Gojcic

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 22:18 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · cs.GR · cs.LG
keywords: neural reconstruction · novel view synthesis · 3D Gaussian splatting · harmonic textures · primitive-based rendering · real-time rendering · deferred decoding

The pith

Neural Harmonic Textures let primitive-based models capture high-frequency detail by turning the alpha blending of interpolated features into a harmonic sum decoded in one pass.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes Neural Harmonic Textures to improve the expressivity of primitive-based 3D representations such as Gaussian splatting. Latent feature vectors are placed on a virtual scaffold around each primitive, interpolated along rays, and passed through periodic activations so that the blended signal becomes a sum of harmonic components. A compact neural network then decodes this signal in a single deferred step. The goal is to retain the speed and scalability of primitives while approaching the detail quality of full neural fields for real-time novel-view synthesis.

Core claim

Anchoring latent feature vectors on a virtual scaffold surrounding each primitive, interpolating the features at ray intersection points, and applying periodic activations converts alpha blending into a weighted sum of harmonic components that a small neural network decodes in one deferred pass.
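In schematic notation (our shorthand, not symbols the paper fixes in the abstract): with barycentric weights $b_v$ at primitive $i$'s point of maximum response $\mathbf{p}^*_i$ along the ray,

$$
\hat f_i \;=\; \sum_{v \in \mathrm{scaffold}(i)} b_v(\mathbf{p}^*_i)\, f_{i,v},
\qquad
S(\mathbf{r}) \;=\; \sum_i T_i\,\alpha_i\,\big[\sin(\hat f_i)\,;\,\cos(\hat f_i)\big],
$$

so the blended feature $S(\mathbf{r})$ is a weighted sum of harmonic components whose amplitudes are the transmittance-weighted opacities $T_i \alpha_i$ (the "harmonic amplitude" of Figure 4), and the small network decodes $S(\mathbf{r})$ once per ray.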

What carries the argument

Neural Harmonic Textures: per-primitive virtual scaffold that holds latent features, periodic activation functions applied after interpolation, and a lightweight deferred decoder that reconstructs the final signal from the resulting harmonics.
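A minimal sketch of this machinery in PyTorch, assembled from the figure captions (Figures 2–4). The shapes, the frequency scale `omega`, and the decoder size are our assumptions for illustration, not the paper's implementation:

```python
# A minimal sketch of the mechanism described above. Shapes, names, and
# `omega` are illustrative assumptions; the paper's gsplat-based
# implementation differs.
import torch

def harmonic_blend(vertex_feats, bary, alpha, transmittance, omega=1.0):
    """Blend per-primitive harmonic features along one ray.

    vertex_feats:  (P, 4, N) latent features on the 4 scaffold vertices
                   of each of P contributing primitives (tetrahedra).
    bary:          (P, 4) barycentric weights at each primitive's point
                   of maximum response along the ray.
    alpha:         (P,) kernel-weighted opacities (harmonic amplitudes).
    transmittance: (P,) accumulated transmittance up to each primitive.
    Returns a (2N,) blended feature: a weighted sum of harmonics.
    """
    f_hat = torch.einsum("pv,pvn->pn", bary, vertex_feats)      # (P, N)
    harmonics = torch.cat([torch.sin(omega * f_hat),
                           torch.cos(omega * f_hat)], dim=-1)   # (P, 2N)
    weights = (transmittance * alpha).unsqueeze(-1)             # (P, 1)
    return (weights * harmonics).sum(dim=0)                     # (2N,)

# Deferred decoding: the small MLP runs once per ray/pixel, not once
# per primitive, which is what keeps the decode a single pass.
decoder = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))

P, N = 12, 8
blended = harmonic_blend(
    vertex_feats=torch.randn(P, 4, N),
    bary=torch.softmax(torch.randn(P, 4), dim=-1),
    alpha=torch.rand(P),
    transmittance=torch.cumprod(1 - 0.1 * torch.rand(P), dim=0),
)
rgb = decoder(blended)  # per-pixel color from one deferred pass
```

The cost structure the paper claims is visible in the shapes: the sine/cosine encoding and the blend run per contributing primitive, while the decoder runs once per ray, so decode cost does not grow with primitive count.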

If this is right

  • The same representation integrates directly into pipelines such as 3DGUT, Triangle Splatting, and 2DGS.
  • High-frequency surface detail becomes representable without increasing the number or size of primitives.
  • The deferred single-pass decoder keeps inference cost low enough for real-time rendering.
  • The same construction extends to 2D image fitting and semantic reconstruction tasks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Periodic feature activations could be swapped for other basis functions to target specific frequency bands in future work (one such swap is sketched after this list).
  • The scaffold idea might transfer to non-primitive representations when local high-frequency control is needed.
  • Because the harmonics are computed per primitive, the approach could support efficient editing or animation of individual scene elements.
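To make the first bullet concrete, a hypothetical drop-in for the sine/cosine encoding that localizes frequency content with a Gaussian window; this is our illustration, not something the paper proposes:

```python
import torch

def gabor_encode(f_hat: torch.Tensor, omega: float = 4.0, sigma: float = 1.0):
    """Hypothetical alternative basis: Gaussian-windowed sinusoids (Gabor atoms).

    Unlike plain sin/cos, the Gaussian envelope confines each component to a
    band around `omega`, one way to target specific frequency bands.
    """
    envelope = torch.exp(-(f_hat / sigma) ** 2)
    return torch.cat([envelope * torch.sin(omega * f_hat),
                      envelope * torch.cos(omega * f_hat)], dim=-1)
```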

Load-bearing premise

That placing features on a virtual scaffold around each primitive and applying periodic activations will reliably extract high-frequency content from diverse scenes without artifacts or per-scene retuning.
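One way to make the premise concrete, reading Figure 4's frequency-modulator description rather than any derivation the paper supplies here: within a primitive the interpolated feature $\hat f$ is affine in position, so a periodic activation $\sin(\omega \hat f(\mathbf{x}))$ (with $\omega$ our symbol for the frequency scale) oscillates at local spatial frequency

$$
\nu(\mathbf{x}) \;=\; \frac{\omega\,\big\|\nabla_{\mathbf{x}} \hat f(\mathbf{x})\big\|}{2\pi},
$$

which is set by the spread of vertex features across the scaffold. The premise is that optimization drives these spreads large enough for fine detail yet keeps them controlled enough to avoid ringing, without per-scene retuning.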

What would settle it

Apply the method to a test scene containing fine high-frequency patterns such as printed text or thin fabric threads and measure whether visible artifacts appear or quality falls below that of a comparable neural-field baseline at equal frame rate.
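A minimal sketch of the quality half of that test, assuming rendered and ground-truth images as float tensors in [0, 1]; `render_nht` and `render_field_baseline` are hypothetical stand-ins for the two pipelines:

```python
import torch

def psnr(pred: torch.Tensor, ref: torch.Tensor, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = torch.mean((pred - ref) ** 2)
    return (20 * torch.log10(torch.tensor(peak)) - 10 * torch.log10(mse)).item()

# Hypothetical protocol: same fine-detail scene, both renderers tuned to
# equal frame rate before comparing quality.
# pred_nht  = render_nht(scene, view)             # assumed API
# pred_base = render_field_baseline(scene, view)  # assumed API
# gap = psnr(pred_nht, gt) - psnr(pred_base, gt)  # a negative gap would count against the claim
```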

Figures

Figures reproduced from arXiv: 2604.01204 by Jorge Condor, Merlin Nimier-David, Nicolas Moenne-Loccoz, Piotr Didyk, Qi Wu, Zan Gojcic.

Figure 1: Neural Harmonic Textures for novel view synthesis.
Figure 2: Neural Harmonic Textures applied to novel-view synthesis. We virtually attach feature vectors f_i to the vertices of tetrahedra inscribing the Gaussian primitives. Following 3DGUT [65], we evaluate the point along the ray where the projected Gaussian has maximum response. We barycentrically interpolate vertex features at that point, and encode them with sine and cosine functions into different channels. …
Figure 3: Illustrating our method in 2D. Each primitive is bounded by an ellipsoid in world space, which becomes a sphere in whitened canonical space (a). Considering a virtual bounding tetrahedron in this canonical space, we attach one N-dimensional feature vector f_j to each vertex. The primitive's contribution is evaluated at the point of maximum response p* of the projected Gaussian along the intersecting ray. …
Figure 4: Harmonic textures. In this formulation, the interpolation function effectively acts as a frequency modulator: large differences between vertex features within a primitive produce rapidly oscillating, spatially varying textures. Each primitive additionally has a kernel-weighted opacity, which acts as the harmonic amplitude. This behavior is illustrated in the inset. …
Figure 5: Our method outperforms 3DGS and 3DGUT at all primitive counts. The improvement is particularly pronounced in the low-primitive regime (≤ 100k), with deltas upwards of 2 dB of PSNR. … state-of-the-art Spherical Voronoi functions [9] (SV) and ours (NHT). We implement all 4 under the same framework (gsplat), under strictly controlled conditions to ensure the only difference between them is the choice of appea…
Figure 6: Comparison between our and previous works on radiance field reconstruction on scenes from MipNeRF360 [1] and Tanks and Temples [27]. Our method models high-frequency detail and view-dependent effects to a higher degree than previous works.
Figure 7: Comparison of our work vs Instant NGP on a 100× compression task. Original images are 45.7MP 14-bit HDR RAW files. We achieve substantially superior perceptual quality at equal compression and similar training times.
Figure 8: LSEG feature PCA visualizations on test views from MipNeRF360. Rows from top to bottom: bicycle, garden, bonsai, kitchen. Each pair shows the ground-truth LSEG features (GT) alongside our rendered features (Ours). Our method faithfully reconstructs semantic feature maps while preserving sharp boundaries, at higher resolutions, and in real time.
Figure 9: Visual comparison at 100× compression. Each cell shows tonemapped PSNR (dB) and LPIPS. Our method (NHT) consistently achieves lower LPIPS (better perceptual quality) than Instant NGP at comparable or higher PSNR. The images are taken from the dataset we curated.
read the original abstract

Primitive-based methods such as 3D Gaussian Splatting have recently become the state-of-the-art for novel-view synthesis and related reconstruction tasks. Compared to neural fields, these representations are more flexible, adaptive, and scale better to large scenes. However, the limited expressivity of individual primitives makes modeling high-frequency detail challenging. We introduce Neural Harmonic Textures, a neural representation approach that anchors latent feature vectors on a virtual scaffold surrounding each primitive. These features are interpolated within the primitive at ray intersection points. Inspired by Fourier analysis, we apply periodic activations to the interpolated features, turning alpha blending into a weighted sum of harmonic components. The resulting signal is then decoded in a single deferred pass using a small neural network, significantly reducing computational cost. Neural Harmonic Textures yield state-of-the-art results in real-time novel view synthesis while bridging the gap between primitive- and neural-field-based reconstruction. Our method integrates seamlessly into existing primitive-based pipelines such as 3DGUT, Triangle Splatting, and 2DGS. We further demonstrate its generality with applications to 2D image fitting and semantic reconstruction.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance; this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes Neural Harmonic Textures to enhance primitive-based representations (e.g., 3D Gaussian Splatting) for novel view synthesis. Latent feature vectors are anchored on a virtual scaffold around each primitive, interpolated at ray-hit points, and processed with periodic activations to convert alpha blending into a weighted sum of harmonic components; the result is decoded by a small deferred neural network. The work claims state-of-the-art real-time performance, seamless integration into pipelines such as 3DGUT, Triangle Splatting, and 2DGS, and extensions to 2D image fitting and semantic reconstruction.

Significance. If the empirical claims hold, the approach would meaningfully advance the field by increasing the expressivity of efficient, scalable primitive representations toward neural-field quality without incurring high computational overhead, potentially offering a practical bridge between the two paradigms in real-time reconstruction tasks.

major comments (2)
  1. [Abstract] The claim that the method 'yield[s] state-of-the-art results in real-time novel view synthesis' is presented without quantitative metrics, tables, ablation studies, or implementation details. The claim is load-bearing for the central performance assertion, and without that support the bridging claim cannot be verified.
  2. [Method] In the method description (periodic activations and scaffold interpolation), no derivation, frequency bounds, or conditioning analysis is supplied for the harmonic basis under typical primitive densities and interpolation schemes. The assumption that the scheme captures high-frequency content without ringing or per-scene tuning is left unexamined, yet it is load-bearing for the expressivity claim.
minor comments (1)
  1. [Abstract] The acronym '3DGUT' is used without expansion, which reduces clarity for readers outside the immediate sub-area.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their detailed and constructive comments on our manuscript. We address each of the major comments below and have prepared revisions to the manuscript to incorporate the suggested improvements.

read point-by-point responses
  1. Referee: [Abstract] The claim that the method 'yield[s] state-of-the-art results in real-time novel view synthesis' is presented without quantitative metrics, tables, ablation studies, or implementation details. The claim is load-bearing for the central performance assertion, and without that support the bridging claim cannot be verified.

    Authors: We agree that the abstract would be strengthened by including quantitative support for the state-of-the-art claim. In the revised manuscript, we will update the abstract to reference specific metrics from our experiments, such as improved PSNR and real-time FPS compared to baselines, while maintaining its conciseness. This will directly tie the claim to the results presented in the paper. revision: yes

  2. Referee: [Method] In the method description (periodic activations and scaffold interpolation), no derivation, frequency bounds, or conditioning analysis is supplied for the harmonic basis under typical primitive densities and interpolation schemes. The assumption that the scheme captures high-frequency content without ringing or per-scene tuning is left unexamined, yet it is load-bearing for the expressivity claim.

    Authors: The referee correctly identifies that the current method section lacks a formal derivation and analysis of the harmonic components. We will revise the manuscript to include a mathematical derivation of the periodic activations inspired by Fourier series, specify frequency bounds based on typical primitive densities, and provide a conditioning analysis. We will also add discussion and experiments addressing potential ringing artifacts and the absence of per-scene tuning, thereby substantiating the expressivity claims. revision: yes

Circularity Check

0 steps flagged

No circularity: forward proposal of scaffolded periodic features with no self-referential reduction

full rationale

The provided abstract and description introduce Neural Harmonic Textures by anchoring latent vectors on virtual scaffolds around primitives, interpolating at ray hits, applying periodic activations to produce harmonic sums, and decoding via a small deferred network. No equations, derivations, or uniqueness theorems are shown that reduce the claimed expressivity or SOTA results to fitted parameters, self-citations, or inputs by construction. The method is presented as a design choice integrating into existing pipelines (3DGUT, Triangle Splatting, 2DGS) without load-bearing self-referential steps. This matches the reader's assessment of score 2.0 as a non-circular forward proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review provides no explicit free parameters, axioms, or invented entities; the scaffold and periodic activations are presented as novel but without stated assumptions or counts.

pith-pipeline@v0.9.0 · 5515 in / 1079 out tokens · 33397 ms · 2026-05-13T22:18:32.749013+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

76 extracted references · 76 canonical work pages · 2 internal anchors

  1. [1] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)

  2. [2] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Zip-NeRF: Anti-aliased grid-based neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2023)

  3. [3] Chao, B., Tseng, H.Y., Porzi, L., Gao, C., Li, T., Li, Q., Saraf, A., Huang, J.B., Kopf, J., Wetzstein, G., Kim, C.: Textured gaussians for enhanced 3D scene appearance modeling. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025)

  4. [4] Chen, A., Xu, Z., Geiger, A., Yu, J., Su, H.: TensoRF: Tensorial radiance fields. In: European Conference on Computer Vision. pp. 333–350. Springer (2022)

  5. [5] Chen, Y., Chen, Z., Zhang, C., Wang, F., Yang, X., Wang, Y., Cai, Z., Yang, L., Liu, H., Lin, G.: GaussianEditor: Swift and controllable 3D editing with gaussian splatting. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21476–21485 (2024)

  6. [6] Chen, Z., Funkhouser, T., Hedman, P., Tagliasacchi, A.: MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. In: The Conference on Computer Vision and Pattern Recognition (CVPR) (2023)

  7. [7] Condor, J., Hermann, N., Yurtsever, M.A., Didyk, P.: Gabor fields: Orientation-selective level-of-detail for volume rendering (2026), https://arxiv.org/abs/2602.05081

  8. [8] Condor, J., Speierer, S., Bode, L., Bozic, A., Green, S., Didyk, P., Jarabo, A.: Don't splat your gaussians: Volumetric ray-traced primitives for modeling and rendering scattering and emissive media. ACM Transactions on Graphics 44(1) (2025)

  9. [9] Di Sario, F., Rebain, D., Verbin, D., Grangetto, M., Tagliasacchi, A.: Spherical voronoi: Directional appearance as a differentiable partition of the sphere. arXiv preprint arXiv:2512.14180 (2025)

  10. [10] Duckworth, D., Hedman, P., Reiser, C., Zhizhin, P., Thibert, J.F., Lučić, M., Szeliski, R., Barron, J.T.: SMERF: Streamable memory efficient radiance fields for real-time large-scene exploration (2023)

  11. [11] Fridovich-Keil, S., Meanti, G., Warburg, F.R., Recht, B., Kanazawa, A.: K-Planes: Explicit radiance fields in space, time, and appearance. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12479–12488 (2023)

  12. [12] Gadirov, H., Wu, Q., Bauer, D., Ma, K.L., Roerdink, J.B., Frey, S.: HyperFLINT: Hypernetwork-based flow estimation and temporal interpolation for scientific ensemble visualization. Computer Graphics Forum 44(3), e70134 (2025). https://doi.org/10.1111/cgf.70134

  13. [13] Govindarajan, S., Rebain, D., Yi, K.M., Tagliasacchi, A.: Radiant foam: Real-time differentiable ray tracing. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4135–4145 (2025)

  14. [14] Govindarajan, S., Sambugaro, Z., Shabanov, A., Takikawa, T., Rebain, D., Sun, W., Conci, N., Yi, K.M., Tagliasacchi, A.: Lagrangian hashing for compressed neural field representations. In: European Conference on Computer Vision. pp. 183–199. Springer (2024)

  15. [15] Hahlbohm, F., Franke, L., Kappel, M., Castillo, S., Eisemann, M., Stamminger, M., Magnor, M.: INPC: Implicit neural point clouds for radiance field rendering. In: 2025 International Conference on 3D Vision (3DV). pp. 168–178. IEEE Computer Society, Los Alamitos, CA, USA (2025). https://doi.org/10.1109/3DV66043.2025.00021

  16. [16] Han, K., Xiang, W., Yu, L.: Volume feature rendering for fast neural radiance field reconstruction. In: Advances in Neural Information Processing Systems. vol. 36, pp. 65416–65427. Curran Associates, Inc. (2023)

  17. [17] Harris, D., Harris, S.L.: Digital design and computer architecture. Morgan Kaufmann (2010)

  18. [18] Hedman, P., Philip, J., Price, T., Frahm, J.M., Drettakis, G., Brostow, G.: Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 37(6), 257:1–257:15 (2018)

  19. [19] Hedman, P., Srinivasan, P.P., Mildenhall, B., Barron, J.T., Debevec, P.: Baking neural radiance fields for real-time view synthesis. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2021)

  20. [20] Held, J., Vandeghen, R., Deliege, A., Hamdi, A., Rebain, D., Giancola, S., Cioppa, A., Vedaldi, A., Ghanem, B., Tagliasacchi, A., et al.: Triangle splatting for real-time radiance field rendering. In: Thirteenth International Conference on 3D Vision (3DV) (2025)

  21. [21] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D gaussian splatting for geometrically accurate radiance fields. In: ACM SIGGRAPH 2024 Conference Papers (2024). https://doi.org/10.1145/3641519.3657428

  22. [22] Huang, Z., Gong, M.: Textured-GS: Gaussian splatting with spatially defined color and opacity. arXiv preprint arXiv:2407.09733 (2024)

  23. [23] Joint Photographic Experts Group: JPEG XL image coding system. https://jpeg.org/jpegxl/ (2024), accessed 2024-05-24

  24. [24] Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3D gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 42(4) (2023)

  25. [25] Kheradmand, S., Rebain, D., Sharma, G., Sun, W., Tseng, Y.C., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: 3D gaussian splatting as markov chain monte carlo. In: Advances in Neural Information Processing Systems (NeurIPS) (2024)

  26. [26] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2017)

  27. [27] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 36(4) (2017)

  28. [28] Kulhanek, J., Rakotosaona, M.J., Manhardt, F., Tsalicoglou, C., Niemeyer, M., Sattler, T., Peng, S., Tombari, F.: LODGE: Level-of-detail large-scale gaussian splatting with efficient rendering. arXiv preprint arXiv:2505.23158 (2025)

  29. [29] Li, B., Weinberger, K.Q., Belongie, S., Koltun, V., Ranftl, R.: Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546 (2022)

  30. [30] Li, Z., Müller, T., Evans, A., Taylor, R.H., Unberath, M., Liu, M.Y., Lin, C.H.: Neuralangelo: High-fidelity neural surface reconstruction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8456–8465 (2023)

  31. [31] Liang, H., Ren, J., Mirzaei, A., Torralba, A., Liu, Z., Gilitschenski, I., Fidler, S., Oztireli, C., Ling, H., Gojcic, Z., Huang, J.: Feed-forward bullet-time reconstruction of dynamic scenes from monocular videos. arXiv preprint arXiv:2412.03526 (2024)

  32. [32] Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural sparse voxel fields. In: Advances in Neural Information Processing Systems (NeurIPS) (2020)

  33. [33] Liu, R., Sun, D., Chen, M., Wang, Y., Feng, A.: Deformable beta splatting. In: Proceedings of SIGGRAPH Conference Papers (2025)

  34. [34] Lombardi, S., Simon, T., Saragih, J., Schwartz, G., Lehrmann, A., Sheikh, Y.: Neural volumes: Learning dynamic renderable volumes from images. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 38(4) (2019)

  35. [35] Lombardi, S., Simon, T., Schwartz, G., Zollhoefer, M., Sheikh, Y., Saragih, J.: Mixture of volumetric primitives for efficient neural rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40(4) (2021)

  36. [36] Luo, A., Du, Y., Tarr, M., Tenenbaum, J., Torralba, A., Gan, C.: Learning neural acoustic fields. Advances in Neural Information Processing Systems 35, 3165–3177 (2022)

  37. [38] Mai, A., Hedstrom, T., Kopanas, G., Kontkanen, J., Kuester, F., Barron, J.T.: Radiance meshes for volumetric reconstruction (2025), https://arxiv.org/abs/2512.04076

  38. [39] Martel, J.N.P., Lindell, D.B., Lin, C.Z., Chan, E.R., Monteiro, M., Wetzstein, G.: ACORN: Adaptive coordinate networks for neural scene representation. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40(4) (2021). https://doi.org/10.1145/3450626.3459785

  39. [40] Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., et al.: Mixed precision training. arXiv preprint arXiv:1710.03740 (2017)

  40. [41] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of ECCV (2020)

  41. [42] Moenne-Loccoz, N., Mirzaei, A., Perel, O., de Lutio, R., Esturo, J.M., State, G., Fidler, S., Sharp, N., Gojcic, Z.: 3D gaussian ray tracing: Fast tracing of particle scenes. ACM Transactions on Graphics and SIGGRAPH Asia (2024)

  42. [43] Mujkanovic, F., Nsampi, N.E., Theobalt, C., Seidel, H.P., Leimkühler, T.: Neural gaussian scale-space fields. ACM Trans. Graph. 43(4) (Jul 2024). https://doi.org/10.1145/3658163

  43. [44] Müller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 41(4) (2022)

  44. [45] Müller, T., McWilliams, B., Rousselle, F., Gross, M., Novák, J.: Neural importance sampling. ACM Transactions on Graphics (TOG) 38(5), 1–19 (2019)

  45. [46] Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 165–174 (2019)

  46. [47] Pumarola, A., Corona, E., Pons-Moll, G., Moreno-Noguer, F.: D-NeRF: Neural radiance fields for dynamic scenes. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10318–10327 (2021)

  47. [48] Reiser, C., Peng, S., Liao, Y., Geiger, A.: KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In: Proceedings of ICCV (2021)

  48. [49] Saragadam, V., LeJeune, D., Tan, J., Balakrishnan, G., Veeraraghavan, A., Baraniuk, R.G.: WIRE: Wavelet implicit neural representations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18507–18516 (2023)

  49. [50] Sitzmann, V., Martel, J., Bergman, A., Lindell, D., Wetzstein, G.: Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems 33, 7462–7473 (2020)

  50. [51] Sitzmann, V., Martel, J.N., Bergman, A.W., Lindell, D.B., Wetzstein, G.: Implicit neural representations with periodic activation functions. In: Proc. NeurIPS (2020)

  51. [52] Su, R., Dong, H., Jin, H., Chen, Y., Wang, G., Li, S.: Vertex features for neural global illumination. In: Proceedings of the SIGGRAPH Asia 2025 Conference Papers. pp. 1–11 (2025)

  52. [53] Sun, C., Sun, M., Chen, H.: Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In: CVPR (2022)

  53. [54] Svitov, D., Morerio, P., Agapito, L., Del Bue, A.: Billboard splatting (BBSplat): Learnable textured primitives for novel view synthesis. arXiv preprint arXiv:2411.08508 (2024)

  54. [55] Takikawa, T., Litalien, J., Yin, K., Kreis, K., Loop, C., Nowrouzezahrai, D., Jacobson, A., McGuire, M., Fidler, S.: Neural geometric level of detail: Real-time rendering with implicit 3D shapes. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11358–11367 (2021)

  55. [56] Takikawa, T., Müller, T., Nimier-David, M., Evans, A., Fidler, S., Jacobson, A., Keller, A.: Compact neural graphics primitives with learned hash probing. In: SIGGRAPH Asia 2023 Conference Papers. SA '23, Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3610548.3618167

  56. [57] Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T., Ng, R.: Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS (2020)

  57. [58] Thies, J., Zollhöfer, M., Nießner, M.: Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 38(4) (2019)

  58. [59] Wang, J., Chen, M., Karaev, N., Vedaldi, A., Rupprecht, C., Novotny, D.: VGGT: Visual geometry grounded transformer. In: Proceedings of the Computer Vision and Pattern Recognition Conference. pp. 5294–5306 (2025)

  59. [60] Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., Wang, W.: NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689 (2021)

  60. [61] Wang, Y., Zhou, J., Zhu, H., Chang, W., Zhou, Y., Li, Z., Chen, J., Pang, J., Shen, C., He, T.: π³: Permutation-equivariant visual geometry learning (2025), https://arxiv.org/abs/2507.13347

  61. [62] Wu, G., Yi, T., Fang, J., Xie, L., Zhang, X., Wei, W., Liu, W., Tian, Q., Wang, X.: 4D gaussian splatting for real-time dynamic scene rendering. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024)

  62. [63] Wu, Q., Bauer, D., Doyle, M.J., Ma, K.L.: Interactive volume visualization via multi-resolution hash encoding based neural representation. IEEE Transactions on Visualization and Computer Graphics pp. 1–14 (2023). https://doi.org/10.1109/TVCG.2023.3293121

  63. [64] Wu, Q., Insley, J.A., Mateevitsi, V.A., Rizzi, S., Papka, M.E., Ma, K.L.: Distributed neural representation for reactive in situ visualization. IEEE Transactions on Visualization and Computer Graphics 31(9), 5199–5214 (2025). https://doi.org/10.1109/TVCG.2024.3432710

  64. [65] Wu, Q., Martinez Esturo, J., Mirzaei, A., Moenne-Loccoz, N., Gojcic, Z.: 3DGUT: Enabling distorted cameras and secondary rays in gaussian splatting. Conference on Computer Vision and Pattern Recognition (CVPR) (2025)

  65. [66] Wurster, S., Zhang, R., Zheng, C.: Gabor splatting for high-quality gigapixel image representations. In: ACM SIGGRAPH 2024 Posters. SIGGRAPH '24, Association for Computing Machinery, New York, NY, USA (2024). https://doi.org/10.1145/3641234.3671081

  66. [67] Xie, Y., Takikawa, T., Saito, S., Litany, O., Yan, S., Khan, N., Tombari, F., Tompkin, J., Sitzmann, V., Sridhar, S.: Neural fields in visual computing and beyond. In: Computer Graphics Forum. vol. 41, pp. 641–676. Wiley Online Library (2022)

  67. [68] Xu, T.X., Hu, W., Lai, Y.K., Shan, Y., Zhang, S.H.: Texture-GS: Disentangling the geometry and texture for 3D gaussian splatting editing. In: European Conference on Computer Vision. pp. 37–53. Springer (2024)

  68. [69] Yariv, L., Hedman, P., Reiser, C., Verbin, D., Srinivasan, P.P., Szeliski, R., Barron, J.T., Mildenhall, B.: BakedSDF: Meshing neural SDFs for real-time view synthesis. In: ACM SIGGRAPH 2023 Conference Proceedings (2023)

  69. [70] Ye, V., Li, R., Kerr, J., Turkulainen, M., Yi, B., Pan, Z., Seiskari, O., Ye, J., Hu, J., Tancik, M., Kanazawa, A.: gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research 26(34), 1–17 (2025)

  70. [71] Ye, V., Li, R., Kerr, J., Turkulainen, M., Yi, B., Pan, Z., Seiskari, O., Ye, J., Hu, J., Tancik, M., Kanazawa, A.: gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research 26(34), 1–17 (2025)

  71. [72] Zhang, K., Bi, S., Tan, H., Xiangli, Y., Zhao, N., Sunkavalli, K., Xu, Z.: GS-LRM: Large reconstruction model for 3D gaussian splatting. European Conference on Computer Vision (2024)

  72. [73] Zhang, X., Chen, A., Xiong, J., Dai, P., Shen, Y., Xu, W.: Neural shell texture splatting: More details and fewer primitives. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 25229–25238 (2025)

  73. [74] Zhang, X., Chen, A., Xiong, J., Dai, P., Shen, Y., Xu, W.: Neural shell texture splatting: More details and fewer primitives. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2025)

  74. [75] Zhou, J., Huang, Y., Dai, W., Zou, J., Zheng, Z., Kan, N., Li, C., Xiong, H.: 3DGabSplat: 3D gabor splatting for frequency-adaptive radiance field rendering. arXiv preprint arXiv:2508.05343 (2025)

  75. [76] Zhou, S., Chang, H., Jiang, S., Fan, Z., Zhu, Z., Xu, D., Chari, P., You, S., Wang, Z., Kadambi, A.: Feature 3DGS: Supercharging 3D gaussian splatting to enable distilled feature fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21676–21685 (2024)

  76. [77] Zhou, X., Nguyen, B.H., Magne, L., Golyanik, V., Leimkühler, T., Theobalt, C.: Splat the net: Radiance fields with splattable neural primitives. arXiv preprint arXiv:2510.08491 (2025)