pith. machine review for the scientific record.

arxiv: 2604.17721 · v1 · submitted 2026-04-20 · 💻 cs.CV · cs.AI


GeGS-PCR: Effective and Robust 3D Point Cloud Registration with Two-Stage Color-Enhanced Geometric-3DGS Fusion

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 05:46 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords point cloud registration · 3D Gaussian splatting · color features · geometric fusion · low overlap · 3DGS · LORA optimization · photometric loss

The pith

Fusing color-enhanced geometry with 3D Gaussian splats registers point clouds robustly at low overlap.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes GeGS-PCR to solve point cloud registration when overlap is low or data are incomplete. It uses a color encoder to pull multi-level features from the point cloud and combines them with geometric information in a Geometric-3DGS module. LORA optimization keeps the model efficient, while a joint loss refines the result using both geometry and color. Testing on a colorized KITTI dataset (ColorKitti), Color3DMatch, and Color3DLoMatch shows much higher accuracy than prior methods. This matters because reliable registration is key for 3D mapping and robotics in messy real-world conditions.

Core claim

GeGS-PCR is a two-stage method that first extracts and enhances color features alongside geometry, then fuses them with 3DGS in the Geometric-3DGS module to create invariant context. With LORA and differentiable rendering plus photometric loss, it reaches 99.9% registration recall, 0.013 relative rotation error, and 0.024 translation error on Color3DMatch and Color3DLoMatch, doubling precision over previous approaches.
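
The LORA component named here is, per the paper's cited reference, low-rank adaptation: a frozen weight matrix is augmented by a small trainable low-rank update. A minimal numpy sketch of the idea follows; it is illustrative only, not the paper's implementation, and all names and shapes are hypothetical:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Low-rank adaptation: the frozen weight W is augmented by a
    trainable low-rank update (alpha / r) * B @ A, where r is the rank."""
    r = A.shape[0]                      # rank of the adapter
    delta = (alpha / r) * (B @ A)       # low-rank update, shape (d_out, d_in)
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in))         # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, init to zero
x = rng.normal(size=(4, d_in))

# With B initialized to zero, the adapted layer reproduces the base layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only A and B are trained, the adapter adds far fewer parameters than fine-tuning W itself, which is the efficiency argument the review attributes to LORA here.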

What carries the argument

The Geometric-3DGS module encodes local neighborhood information of colored superpoints to produce a globally invariant geometric-color context.
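
In its simplest conceivable form, "encoding local neighborhood information of colored superpoints" means summarizing the geometry and color around each superpoint center. The toy sketch below is an editorial illustration of that idea, not the paper's Geometric-3DGS module; every function and parameter name is invented:

```python
import numpy as np

def superpoint_descriptors(points, colors, centers, k=16):
    """Toy neighborhood encoder: for each superpoint center, gather the k
    nearest colored points and summarize the neighborhood by its mean
    centered offset (translation-invariant) and mean color."""
    descs = []
    for c in centers:
        d = np.linalg.norm(points - c, axis=1)
        idx = np.argsort(d)[:k]                 # k nearest neighbors
        offsets = points[idx] - c               # offsets are invariant to translation
        descs.append(np.concatenate([offsets.mean(axis=0),
                                     colors[idx].mean(axis=0)]))
    return np.stack(descs)                      # (num_superpoints, 6)

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 3))
cols = rng.uniform(size=(200, 3))
centers = pts[rng.choice(200, size=5, replace=False)]
print(superpoint_descriptors(pts, cols, centers).shape)  # (5, 6)
```

The paper's module presumably learns much richer, rotation-aware context than this mean pooling; the sketch only shows why mixing local offsets with local color yields a joint geometric-color descriptor.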

If this is right

  • The method achieves 99.9% registration recall on Color3DMatch and Color3DLoMatch.
  • Relative rotation error drops to 0.013 and translation error to 0.024.
  • Performance improves by at least a factor of 2 over prior methods.
  • Strong results hold even in extremely low-overlap scenarios.
  • Fast differentiable rendering aids convergence during registration.
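
The rotation and translation metrics in these bullets have standard formulations; the sketch below uses a common definition (geodesic rotation angle, Euclidean translation distance). The abstract does not state the paper's exact units or conventions for RRE/RTE, so treat this as an assumption:

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Relative rotation error: geodesic angle (degrees) between rotations."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error(t_est, t_gt):
    """Relative translation error: Euclidean distance between translations."""
    return np.linalg.norm(t_est - t_gt)

# A perfect estimate has zero error under both metrics.
R = np.eye(3)
t = np.array([1.0, 2.0, 3.0])
assert rotation_error_deg(R, R) == 0.0
assert translation_error(t, t) == 0.0
```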

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This fusion strategy might extend to other 3D tasks like reconstruction or object detection where color cues help disambiguate geometry.
  • The reliance on 3DGS opens possibilities for integrating with novel view synthesis pipelines.
  • Testing on additional real-world datasets with varying lighting could reveal limits of the color enhancement.

Load-bearing premise

The color information in the point cloud remains reliable and discriminative in low-overlap and incomplete scenarios.

What would settle it

The advantage of the color-enhanced fusion would be falsified by showing that, on a dataset with unreliable or absent color data, the method's registration errors exceed those of purely geometric baselines.

Figures

Figures reproduced from arXiv: 2604.17721 by Haiduo Huang, Jiayi Tian, Pengju Ren, Tian Xia, Wenzhe Zhao.

Figure 1: In scenarios with minimal overlap, incomplete geomet… (figures/full_fig_p002_1.png)
Figure 2: Pipeline. The entire network backbone is divided into coarse and fine scales. The feature… (figures/full_fig_p004_2.png)
Figure 3: Registration performance with GeGS-PCR and Geometric self-attention. (figures/full_fig_p008_3.png)
Figure 4: Registration performance with GeGS-PCR and Geometric self-attention. (figures/full_fig_p010_4.png)
Figure 5: Left: The structure of 3DGS self-attention module. Right: The computation graph of 3DGS… (figures/full_fig_p015_5.png)
Figure 6: Comparison of Training Loss with and without LoRA (figures/full_fig_p017_6.png)
Figure 7: Registration results on Color3DMatch and Color3DLoMatch. (figures/full_fig_p021_7.png)
Figure 8: Registration results on ColorKitti. (figures/full_fig_p022_8.png)
Figure 9: Registration performance with GeGS-PCR and Geometric Self-Attention across various… (figures/full_fig_p023_9.png)
Original abstract

We address the challenge of point cloud registration using color information, where traditional methods relying solely on geometric features often struggle in low-overlap and incomplete scenarios. To overcome these limitations, we propose GeGS-PCR, a novel two-stage method that combines geometric, color, and Gaussian information for robust registration. Our approach incorporates a dedicated color encoder that enhances color features by extracting multi-level geometric and color data from the original point cloud. We introduce the Geometric-3DGS module, which encodes the local neighborhood information of colored superpoints to ensure a globally invariant geometric-color context. Leveraging LORA optimization, we maintain high performance while preserving the expressiveness of 3DGS. Additionally, fast differentiable rendering is utilized to refine the registration process, leading to improved convergence. To further enhance performance, we propose a joint photometric loss that exploits both geometric and color features. This enables strong performance in challenging conditions with extremely low point cloud overlap. We validate our method by colorizing the Kitti dataset as ColorKitti and testing on both Color3DMatch and Color3DLoMatch datasets. Our method achieves state-of-the-art performance with Registration Recall at 99.9%, Relative Rotation Error as low as 0.013, and Relative Translation Error as low as 0.024, improving precision by at least a factor of 2.
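
As a rough illustration of what a "joint photometric loss that exploits both geometric and color features" can mean on already-matched point pairs, here is a heavily simplified sketch. The paper's actual loss involves differentiable 3DGS rendering, which is omitted here, and the weight `lam` is an invented parameter:

```python
import numpy as np

def joint_loss(src_pts, src_cols, tgt_pts, tgt_cols, lam=0.5):
    """Toy joint objective on matched point pairs: geometric residual
    (point-to-point distance) plus a photometric residual (color
    difference), weighted by lam. Illustrative only."""
    geo = np.mean(np.linalg.norm(src_pts - tgt_pts, axis=1))
    photo = np.mean(np.linalg.norm(src_cols - tgt_cols, axis=1))
    return geo + lam * photo

# Identical matched pairs give zero loss.
p = np.ones((10, 3))
c = np.full((10, 3), 0.5)
assert joint_loss(p, c, p, c) == 0.0
```

The point of such a combination is that the color term can still discriminate correspondences where the geometric term is ambiguous, e.g. on flat or repetitive surfaces.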

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript proposes GeGS-PCR, a two-stage point cloud registration method that fuses geometric features with color information and 3D Gaussian Splatting (3DGS). It introduces a dedicated color encoder to extract multi-level geometric-color features from the input point cloud, a Geometric-3DGS module that encodes local neighborhoods of colored superpoints to produce globally invariant context, LORA-based optimization for efficiency, fast differentiable rendering for refinement, and a joint photometric loss combining geometric and color cues. The approach is validated on colorized versions of KITTI (ColorKitti), 3DMatch, and 3DLoMatch, claiming state-of-the-art results with 99.9% registration recall, relative rotation error as low as 0.013, and relative translation error as low as 0.024, for at least a 2× precision gain, especially under low overlap.

Significance. If the performance claims hold under rigorous verification, the work could meaningfully improve registration robustness in low-overlap and incomplete regimes by leveraging color alongside geometry and 3DGS, with the LORA and differentiable rendering components offering practical efficiency benefits. The explicit construction of colorized datasets and the two-stage fusion strategy represent a clear attempt to address a known weakness of pure geometric methods.

major comments (1)
  1. [Experimental validation] Experimental validation (Section 4 / results on Color3DLoMatch): The headline SOTA metrics (99.9% RR, RRE=0.013, RTE=0.024) and the factor-of-2 precision claim rest on the premise that the color encoder plus GeGS-PCR fusion yields globally invariant geometric-color features even at minimal overlap. No ablation is described that perturbs or removes color information while holding geometry fixed (e.g., by adding view-dependent noise, inconsistent coloring, or color dropout in the overlap region). Without such controls, it remains possible that the reported gains are artifacts of the particular colorization procedure rather than a general property of the architecture.
minor comments (2)
  1. [Abstract / Dataset preparation] The abstract states that the KITTI dataset is colorized as ColorKitti but provides no description of the colorization procedure, point-wise color assignment, or consistency checks across views; this detail is needed for reproducibility.
  2. [Experiments] The manuscript claims 'strong performance in challenging conditions with extremely low point cloud overlap' but does not quantify the overlap ranges tested or report failure cases, which would strengthen the robustness narrative.
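
The perturbation controls proposed in the major comment (color dropout restricted to the overlap region, view-dependent color noise) could be implemented along these lines; function names and parameters here are illustrative, not from the paper:

```python
import numpy as np

def perturb_colors(colors, overlap_mask, drop_prob=0.5, noise_std=0.1, seed=0):
    """Control experiment sketch: drop colors inside the overlap region with
    probability drop_prob (replaced by neutral gray) and add Gaussian noise
    elsewhere, holding geometry fixed."""
    rng = np.random.default_rng(seed)
    out = colors.copy()
    drop = overlap_mask & (rng.uniform(size=len(colors)) < drop_prob)
    out[drop] = 0.5                                   # color dropout -> gray
    out[~overlap_mask] += rng.normal(0.0, noise_std,
                                     size=out[~overlap_mask].shape)
    return np.clip(out, 0.0, 1.0)

cols = np.random.default_rng(2).uniform(size=(100, 3))
mask = np.zeros(100, dtype=bool)
mask[:50] = True                                      # first half is the overlap region
pert = perturb_colors(cols, mask)
print(pert.shape)  # (100, 3)
```

Running the full pipeline on such perturbed inputs, against a geometry-only ablation, would isolate how much of the reported gain actually comes from color.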

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comment on experimental validation below and will incorporate the suggested controls in the revision.

Point-by-point responses
  1. Referee: [Experimental validation] Experimental validation (Section 4 / results on Color3DLoMatch): The headline SOTA metrics (99.9% RR, RRE=0.013, RTE=0.024) and the factor-of-2 precision claim rest on the premise that the color encoder plus GeGS-PCR fusion yields globally invariant geometric-color features even at minimal overlap. No ablation is described that perturbs or removes color information while holding geometry fixed (e.g., by adding view-dependent noise, inconsistent coloring, or color dropout in the overlap region). Without such controls, it remains possible that the reported gains are artifacts of the particular colorization procedure rather than a general property of the architecture.

    Authors: We agree that the absence of explicit ablations isolating the contribution of color information (while holding geometry fixed) leaves open the possibility that gains could be tied to the colorization procedure. In the revised manuscript we will add the following controls on Color3DLoMatch: (1) color dropout restricted to the overlap region, (2) injection of view-dependent color noise, and (3) a geometry-only ablation obtained by disabling the color encoder while keeping all other modules and training identical. These experiments will quantify the incremental benefit of the color-enhanced Geometric-3DGS fusion and will be reported alongside the existing results to substantiate the claim that the architecture produces globally invariant geometric-color features.

    Revision: yes

Circularity Check

0 steps flagged

No circularity: empirical method extension with independent validation

full rationale

The paper presents GeGS-PCR as a two-stage fusion architecture that augments existing 3DGS and point-cloud registration pipelines with a dedicated color encoder, geometric-color superpoint encoding, LORA optimization, differentiable rendering, and a joint photometric loss. All components are described as novel combinations of prior techniques rather than derived from the paper's own outputs. Performance numbers (99.9% RR, RRE 0.013, RTE 0.024 on Color3DLoMatch) are reported as empirical results after colorizing KITTI/3DMatch variants and running the full pipeline; they are not obtained by fitting parameters to the target metrics and then relabeling the fit as a prediction. No equations, uniqueness theorems, or self-citations are invoked to force the central claims. The derivation chain is therefore self-contained and externally falsifiable via the stated datasets and ablations.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review; no explicit free parameters, axioms, or invented entities are stated. The method appears to rely on standard deep-learning and 3DGS building blocks without introducing new postulated entities.

pith-pipeline@v0.9.0 · 5582 in / 1234 out tokens · 56610 ms · 2026-05-10T05:46:42.436555+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

47 extracted references · 1 canonical work page

  1. [1]

    Dreg-nerf: Deep registration for neural radiance fields

    Yu Chen and Gim Hee Lee. Dreg-nerf: Deep registration for neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22703–22713, 2023

  2. [2]

    Image-to-lidar self-supervised distillation for autonomous driving data

    Corentin Sautier, Gilles Puy, Spyros Gidaris, Alexandre Boulch, Andrei Bursuc, and Renaud Marlet. Image-to-lidar self-supervised distillation for autonomous driving data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9891–9901, 2022

  3. [3]

    3d gaussian splatting for real-time radiance field rendering

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):139–1, 2023

  4. [4]

    Clip-fields: Weakly supervised semantic fields for robotic memory

    Nur Muhammad Mahi Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, and Arthur Szlam. Clip-fields: Weakly supervised semantic fields for robotic memory. arXiv preprint arXiv:2210.05663, 2022

  5. [5]

    Nice-slam: Neural implicit scalable encoding for slam

    Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R Oswald, and Marc Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 12786–12796, 2022

  6. [6]

    Regtr: End-to-end point cloud correspondences with transformers

    Zi Jian Yew and Gim Hee Lee. Regtr: End-to-end point cloud correspondences with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 6677–6686, 2022

  7. [7]

    Spinnet: Learning a general surface descriptor for 3d point cloud registration

    Sheng Ao, Qingyong Hu, Bo Yang, Andrew Markham, and Yulan Guo. Spinnet: Learning a general surface descriptor for 3d point cloud registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11753–11762, 2021

  8. [8]

    Fully convolutional geometric features

    Christopher Choy, Jaesik Park, and Vladlen Koltun. Fully convolutional geometric features. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 8958–8966, 2019

  9. [9]

    D3feat: Joint learning of dense detection and description of 3d local features

    Xuyang Bai, Zixin Luo, Lei Zhou, Hongbo Fu, Long Quan, and Chiew-Lan Tai. D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 6359–6367, 2020

  10. [10]

    Ppfnet: Global context aware local features for robust 3d point matching

    Haowen Deng, Tolga Birdal, and Slobodan Ilic. Ppfnet: Global context aware local features for robust 3d point matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 195–205, 2018

  11. [11]

    Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences

    Xiaoshui Huang, Guofeng Mei, and Jian Zhang. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11366–11374, 2020

  12. [12]

    Loftr: Detector-free local feature matching with transformers

    Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. Loftr: Detector-free local feature matching with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 8922–8931, 2021

  13. [13]

    Patch2pix: Epipolar-guided pixel-level correspondences

    Qunjie Zhou, Torsten Sattler, and Laura Leal-Taixe. Patch2pix: Epipolar-guided pixel-level correspondences. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 4669–4678, 2021

  14. [14]

    3d registration with maximal cliques

    Xiyu Zhang, Jiaqi Yang, Shikun Zhang, and Yanning Zhang. 3d registration with maximal cliques. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 17745–17754, 2023

  15. [15]

    Geometric transformer for fast and robust point cloud registration

    Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, and Kai Xu. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11143–11152, 2022

  16. [16]

    Robust point cloud registration framework based on deep graph matching

    Kexue Fu, Shaolei Liu, Xiaoyuan Luo, and Manning Wang. Robust point cloud registration framework based on deep graph matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 8893–8902, 2021

  17. [17]

    Deepgmr: Learning latent gaussian mixture models for registration

    Wentao Yuan, Benjamin Eckart, Kihwan Kim, Varun Jampani, Dieter Fox, and Jan Kautz. Deepgmr: Learning latent gaussian mixture models for registration. In European conference on computer vision (ECCV), pages 733–750. Springer, 2020

  18. [18]

    Deep global registration

    Christopher Choy, Wei Dong, and Vladlen Koltun. Deep global registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 2514–2523, 2020

  19. [19]

    Peal: Prior-embedded explicit attention learning for low-overlap point cloud registration

    Junle Yu, Luwei Ren, Yu Zhang, Wenhui Zhou, Lili Lin, and Guojun Dai. Peal: Prior-embedded explicit attention learning for low-overlap point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17702–17711, 2023

  20. [20]

    Colorpcr: Color point cloud registration with multi-stage geometric-color fusion

    Juncheng Mu, Lin Bie, Shaoyi Du, and Yue Gao. Colorpcr: Color point cloud registration with multi-stage geometric-color fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21061–21070, 2024

  21. [21]

    Lora: Low-rank adaptation of large language models

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022

  22. [22]

    A method for registration of 3-d shapes

    P.J. Besl and Neil D. McKay. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992

  23. [23]

    Fully convolutional geometric features

    Christopher Choy, Jaesik Park, and Vladlen Koltun. Fully convolutional geometric features. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 8957–8965, 2019

  24. [24]

    Pointdsc: Robust point cloud registration using deep spatial consistency

    Xuyang Bai, Zixin Luo, Lei Zhou, Hongkai Chen, Lei Li, Zeyu Hu, Hongbo Fu, and Chiew-Lan Tai. Pointdsc: Robust point cloud registration using deep spatial consistency. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 15859–15869, 2021

  25. [25]

    The perfect match: 3d point cloud matching with smoothed densities

    Zan Gojcic, Caifa Zhou, Jan D Wegner, and Andreas Wieser. The perfect match: 3d point cloud matching with smoothed densities. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 5545–5554, 2019

  26. [26]

    Joint alignment of multiple point sets with batch and incremental expectation-maximization

    Georgios Dimitrios Evangelidis and Radu Horaud. Joint alignment of multiple point sets with batch and incremental expectation-maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1397–1410, 2017

  27. [27]

    Unsupervised point cloud registration by learning unified gaussian mixture models

    Xiaoshui Huang, Sheng Li, Yifan Zuo, Yuming Fang, Jian Zhang, and Xiaowei Zhao. Unsupervised point cloud registration by learning unified gaussian mixture models. IEEE Robotics and Automation Letters, 7(3):7028–7035, 2022

  28. [28]

    Predator: Registration of 3d point clouds with low overlap

    Shengyu Huang, Zan Gojcic, Mikhail Usvyatsov, Andreas Wieser, and Konrad Schindler. Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition (CVPR), pages 4267–4276, 2021

  29. [29]

    Pointnet: Deep learning on point sets for 3d classification and segmentation

    Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 652–660, 2017

  30. [30]

    Pointnet++: Deep hierarchical feature learning on point sets in a metric space

    Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017

  31. [31]

    Feature pyramid networks for object detection

    Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 2117–2125, 2017

  32. [32]

    Kpconv: Flexible and deformable convolution for point clouds

    Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 6411–6420, 2019

  33. [33]

    Bootstrap your own correspondences

    Mohamed El Banani and Justin Johnson. Bootstrap your own correspondences. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6433–6442, 2021

  34. [34]

    Improving rgb-d point cloud registration by learning multi-scale local linear transformation

    Ziming Wang, Xiaoliang Huo, Zhenghao Chen, Jing Zhang, Lu Sheng, and Dong Xu. Improving rgb-d point cloud registration by learning multi-scale local linear transformation. In European Conference on Computer Vision (ECCV), pages 175–191. Springer, 2022

  35. [35]

    Pointmbf: A multi-scale bidirectional fusion network for unsupervised rgb-d point cloud registration

    Mingzhi Yuan, Kexue Fu, Zhihao Li, Yucong Meng, and Manning Wang. Pointmbf: A multi-scale bidirectional fusion network for unsupervised rgb-d point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 17694–17705, 2023

  36. [36]

    Pcr-cg: Point cloud registration via deep explicit color and geometry

    Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, and Ji Hou. Pcr-cg: Point cloud registration via deep explicit color and geometry. In European conference on computer vision (ECCV), pages 443–459. Springer, 2022

  37. [37]

    Colored point cloud registration revisited

    Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Colored point cloud registration revisited. In Proceedings of the IEEE international conference on computer vision (ICCV), pages 143–152, 2017

  38. [38]

    Color point cloud registration with 4d icp algorithm

    Hao Men, Biruk Gebre, and Kishore Pochiraju. Color point cloud registration with 4d icp algorithm. In 2011 IEEE International Conference on Robotics and Automation, pages 1511–1516. IEEE, 2011

  39. [39]

    Control color: Multimodal diffusion-based interactive image colorization

    Zhexin Liang, Zhaochen Li, Shangchen Zhou, Chongyi Li, and Chen Change Loy. Control color: Multimodal diffusion-based interactive image colorization. International Journal of Computer Vision, pages 1–27, 2025

  40. [40]

    Blind quality assessment of dense 3d point clouds with structure guided resampling

    Wei Zhou, Qi Yang, Wu Chen, Qiuping Jiang, Guangtao Zhai, and Weisi Lin. Blind quality assessment of dense 3d point clouds with structure guided resampling. ACM Transactions on Multimedia Computing, Communications and Applications, 20(8):1–21, 2024

  41. [41]

    4d gaussian splatting for real-time dynamic scene rendering

    Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 20310–20320, 2024

  42. [42]

    Mip-splatting: Alias-free 3d gaussian splatting

    Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 19447–19456, 2024

  43. [43]

    Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis

    Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. In 2024 International Conference on 3D Vision (3DV), pages 800–809. IEEE, 2024

  44. [44]

    Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration

    Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, and Slobodan Ilic. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. Advances in Neural Information Processing Systems, 34:23872–23884, 2021

  45. [45]

    3dfeat-net: Weakly supervised local 3d features for point cloud registration

    Zi Jian Yew and Gim Hee Lee. 3dfeat-net: Weakly supervised local 3d features for point cloud registration. In Proceedings of the European conference on computer vision (ECCV), pages 607–623, 2018

  46. [46]

    Hregnet: A hierarchical network for large-scale outdoor lidar point cloud registration

    Fan Lu, Guang Chen, Yinlong Liu, Lijun Zhang, Sanqing Qu, Shu Liu, and Rongqi Gu. Hregnet: A hierarchical network for large-scale outdoor lidar point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 16014–16023, 2021

  47. [47]

    You only hypothesize once: Point cloud registration with rotation-equivariant descriptors

    Haiping Wang, Yuan Liu, Zhen Dong, and Wenping Wang. You only hypothesize once: Point cloud registration with rotation-equivariant descriptors. In Proceedings of the 30th ACM International Conference on Multimedia, pages 1630–1641, 2022