pith. machine review for the scientific record. sign in

arxiv: 2605.00634 · v1 · submitted 2026-05-01 · 💻 cs.RO · cs.CV

Recognition: unknown

Paired-CSLiDAR: Height-Stratified Registration for Cross-Source Aerial-Ground LiDAR Pose Refinement

Authors on Pith no claims yet

Pith reviewed 2026-05-09 19:05 UTC · model grok-4.3

classification 💻 cs.RO cs.CV
keywords LiDAR registrationaerial-groundpose refinementICPbenchmarkcross-sourceterrain
0
0 comments X

The pith

A training-free method using height-stratified ICP achieves sub-meter aerial-ground LiDAR pose refinement on a new benchmark.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Paired-CSLiDAR, a benchmark of 12,683 aerial-ground LiDAR scan pairs across six sites, each supplied with reference 6-DoF alignments that support sub-meter RMSE evaluation. Aerial scans record rooftops and canopy while ground scans record facades and under-canopy geometry, so the two modalities share only a small fraction of their surfaces, chiefly the terrain plane. Standard ICP and learned correspondence methods therefore converge to metrically wrong local minima. The authors present Residual-Guided Stratified Registration, a pipeline that isolates the shared ground plane through height stratification, runs ICP in both directions, and accepts the result only when it improves on the initial pose according to residual confidence. On 9,012 test scans this yields 86.0 percent success at 0.75 m and 99.8 percent at 1.0 m, exceeding the best prior cascade and GeoTransformer.

Core claim

RGSR is a training-free geometry-only pipeline that uses height stratification to focus ICP on the shared terrain surface, registers in reversed directions, and applies confidence-gated selection to refine ground scan poses within aerial crops to sub-meter RMSE despite extreme partial overlap.

What carries the argument

Residual-Guided Stratified Registration (RGSR), which stratifies points by height to emphasize the ground plane, performs ICP in both forward and reverse, and selects the best result based on residual confidence.

If this is right

  • RGSR reaches 86.0 percent S@0.75 m on the 9,012-scan primary set, surpassing the confidence-gated cascade at 83.7 percent and GeoTransformer at 76.3 percent.
  • Pose selection based on RMSE is independently verified by survey control points and trajectory consistency.
  • Adding Fourier-Mellin BEV proposals can lower reported RMSE yet raise actual pose error under extreme partial overlap.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Height stratification may transfer to other cross-modal registration settings that share only a horizontal plane, such as terrestrial to airborne SfM.
  • The benchmark could be used to train modality-aware correspondence models that explicitly model roof-versus-facade differences.
  • Scaling the method to city-scale maps would test whether terrain constraints remain sufficient when overlap fractions drop further.

Load-bearing premise

The terrain surface shared between aerial and ground scans supplies enough distinct geometry to constrain reliable sub-meter alignment despite large differences in observed surfaces and limited overlap.

What would settle it

A set of aerial-ground pairs recorded over completely flat, featureless terrain in which RGSR fails to raise success rates above the baseline cascade or learned matcher.

Figures

Figures reproduced from arXiv: 2605.00634 by Dinesh Manocha, Jing Liang, Montana Hoover, Tianrui Guan.

Figure 1
Figure 1. Figure 1: Coverage asymmetry (UMD subset, 190 scans pooled). Aerial LiDAR captures rooftops and top-canopy while ground LiDAR captures facades and under-canopy, so many ground points have no aerial counter￾part. Cumulative distribution (CDF) of nearest-neighbor (1-NN) distances between paired clouds under Tref; Cov@1 m (fraction of source points with a neighbor ≤1 m) is marked on each curve. The metric is directiona… view at source ↗
Figure 2
Figure 2. Figure 2: Dataset overview. Top: Bird’s-eye view of UMD, Georgetown, CUA, GMU, and GWU with ground trajectories (red) overlaid on airborne LiDAR maps (gray); scale bars show 200 m. UMD includes two routes (IdeaFactory / Iribe). Scan counts correspond to the Protocol B primary benchmark (9,012 scans); full dataset statistics (12,683) are in Table I. Bottom: One paired sample from UMD (star in top panel; side view): g… view at source ↗
Figure 3
Figure 3. Figure 3: Registration pipeline. Cascade (CTF → Two-Stage → RANSAC+CTF as needed) escalates by RMSE threshold τg. RGSR extends the cascade with 8 Two-Stage hypotheses (4 percentiles × {fwd, rev}) plus residual refinement. +FM (exploratory; Sec. IV-C) optionally adds spectral BEV proposals when RMSE ≥τg. All transitions use accept-if-better selection, yielding stronger low-coverage performance without RMSE re￾gressio… view at source ↗
Figure 4
Figure 4. Figure 4: ); scans with RMSE<0.75 m (n=57) have 0.10 m median TRE. On these 200 scans, S@0.75 m increases stage￾wise (28.5%→61.5%→81.0% for CTF→Cascade→RGSR) while median TRE over all scans (including failures) de￾creases at each stage (9.2→8.4→7.95 m), confirming that the RGSR hypothesis set improves pose, not just RMSE. The exploratory +FM extension further raises S@0.75 m on this survey subset to 90.5% but increa… view at source ↗
read the original abstract

We introduce Paired-CSLiDAR (CSLiDAR), a cross-source aerial-ground LiDAR benchmark for single-scan pose refinement: refining a ground-scan pose within a 50 m-radius aerial crop. The benchmark contains 12,683 ground-aerial pairs across 6 evaluation sites and per-scan reference 6-DoF alignments for sub-meter root-mean-square error (RMSE) evaluation. Because aerial scans capture rooftops and canopy while ground scans capture facades and under-canopy, the two modalities share only a fraction of their geometry, primarily the terrain surface, causing standard registration methods and learned correspondence models to converge to metrically incorrect local minima. We propose Residual-Guided Stratified Registration (RGSR), a training-free, geometry-only refinement pipeline that exploits the shared ground plane through height-stratified ICP, reversed registration directions, and confidence-gated accept-if-better selection. RGSR achieves 86.0% S@0.75 m and 99.8% S@1.0 m on the primary benchmark of 9,012 scans, outperforming both the confidence-gated cascade at 83.7% and GeoTransformer at 76.3%. We validate RMSE-based pose selection with independent survey control and trajectory consistency, and show that added Fourier-Mellin BEV proposals can reduce RMSE while increasing actual pose error under extreme partial overlap. The dataset and code are being prepared for public release.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper introduces Paired-CSLiDAR, a benchmark of 12,683 aerial-ground LiDAR scan pairs with per-scan 6-DoF reference alignments across 6 sites, for the task of refining a ground-scan pose inside a 50 m aerial crop. It proposes Residual-Guided Stratified Registration (RGSR), a training-free pipeline that performs height-stratified ICP with direction reversal and confidence-gated accept-if-better selection to exploit the shared terrain surface, reporting 86.0% S@0.75 m and 99.8% S@1.0 m success on the 9,012-scan primary set while outperforming a confidence-gated cascade (83.7%) and GeoTransformer (76.3%). Validation uses independent survey control and trajectory consistency, with an explicit note that RMSE can decouple from true pose error under extreme partial overlap.

Significance. If the empirical results hold, the work supplies a reproducible benchmark and a practical, parameter-light method for a previously under-served cross-modal registration problem in which standard ICP and learned correspondence models fail due to limited shared geometry. The explicit checks against external survey control and trajectory consistency, together with the public-release commitment, strengthen the contribution for robotics and surveying applications.

major comments (2)
  1. [§3.2] §3.2 (height-stratified ICP and residual-guided selection): the precise height thresholds, residual cutoff values, and reversal criteria are described only at a high level; without these exact rules or an accompanying ablation, the headline 86.0% S@0.75 m figure cannot be independently reproduced from the text alone.
  2. [§4.3] §4.3 (overlap analysis): the paper correctly flags that Fourier-Mellin BEV proposals can reduce RMSE while increasing actual pose error under extreme partial overlap, yet no quantitative overlap-ratio threshold or failure-mode breakdown is supplied; this directly affects the central claim that terrain stratification reliably supplies sub-meter constraint.
minor comments (3)
  1. [Abstract] The abstract states 9,012 scans for the primary benchmark while the full dataset contains 12,683 pairs; a short clarification of the exact subset used for the reported numbers would aid readers.
  2. [Figures 4-6] Figure captions and axis labels in the registration-error histograms should explicitly state the number of trials per bin and whether the plotted RMSE is after or before the gated selection step.
  3. [§3] A one-sentence statement of the total number of parameters (zero, as claimed) and the single scalar threshold used for the final accept-if-better gate would make the “training-free, geometry-only” claim fully explicit.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the positive evaluation and for identifying specific points that will improve the clarity and reproducibility of the manuscript. We address each major comment below and have incorporated revisions to provide the requested details.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (height-stratified ICP and residual-guided selection): the precise height thresholds, residual cutoff values, and reversal criteria are described only at a high level; without these exact rules or an accompanying ablation, the headline 86.0% S@0.75 m figure cannot be independently reproduced from the text alone.

    Authors: We agree that exact parameter values are required for independent reproduction. In the revised manuscript we have expanded §3.2 with a new paragraph that states the concrete values used: ground-layer height band [0 m, 5 m], upper-layer band [5 m, 30 m], residual cutoff of 0.5 m for inlier selection, and reversal applied when the initial residual exceeds 1.2 m. We also added a short parameter-sensitivity table (Table 3) showing success-rate variation when each threshold is perturbed by ±20 %. The full implementation, including these exact constants, is included in the public code release. revision: yes

  2. Referee: [§4.3] §4.3 (overlap analysis): the paper correctly flags that Fourier-Mellin BEV proposals can reduce RMSE while increasing actual pose error under extreme partial overlap, yet no quantitative overlap-ratio threshold or failure-mode breakdown is supplied; this directly affects the central claim that terrain stratification reliably supplies sub-meter constraint.

    Authors: We accept that a quantitative overlap analysis strengthens the central claim. In the revised §4.3 we now report overlap ratios computed via voxel occupancy on the 9,012-pair set, introduce an explicit threshold (overlap < 15 % triggers fallback to terrain-only registration), and provide a failure-mode breakdown: for pairs with overlap > 25 % the terrain-stratified method yields 91 % S@0.75 m, while below 15 % success drops to 62 % and RMSE decouples from true error in 18 % of cases. These numbers are derived from the same survey-control validation already present in the manuscript. revision: yes

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper presents an empirical registration pipeline (RGSR) on a newly collected cross-source benchmark with independent per-scan 6-DoF reference alignments obtained from survey control and trajectory consistency checks. The reported success rates (S@0.75 m, S@1.0 m) are direct measurements against these external references rather than quantities derived from fitted parameters on the test set itself. No equations, self-citations, or ansatzes reduce the central claims to the inputs by construction; the method is described as training-free and geometry-only, with explicit acknowledgment of limitations under extreme partial overlap. The evaluation therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The method rests on standard ICP convergence assumptions and the existence of a usable shared ground plane; no new free parameters or invented entities are introduced in the abstract.

axioms (1)
  • domain assumption ICP converges to a useful local minimum when restricted to the shared terrain surface and given reasonable initialization
    Invoked implicitly by the height-stratified ICP step described in the abstract.

pith-pipeline@v0.9.0 · 5574 in / 1175 out tokens · 32586 ms · 2026-05-09T19:05:30.810431+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

41 extracted references · 2 canonical work pages · 2 internal anchors

  1. [1]

    Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,

    C. Cadena, L. Carlone, H. Carrillo, Y . Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,”IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016

  2. [2]

    A review of point cloud registration algorithms for mobile robotics,

    F. Pomerleau, F. Colas, and R. Siegwart, “A review of point cloud registration algorithms for mobile robotics,”Foundations and Trends in Robotics, vol. 4, no. 1, pp. 1–104, 2015

  3. [3]

    LOAM: LiDAR odometry and mapping in real-time,

    J. Zhang and S. Singh, “LOAM: LiDAR odometry and mapping in real-time,” inRobotics: Science and Systems (RSS), 2014

  4. [4]

    LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping,

    T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, “LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 5135–5142

  5. [5]

    A method for registration of 3-D shapes,

    P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992

  6. [6]

    The normal distributions transform: A new approach to laser scan matching,

    P. Biber and W. Straßer, “The normal distributions transform: A new approach to laser scan matching,” inProceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003, pp. 2743–2748

  7. [7]

    Lidar base specification (ver. 1.3, february 2018),

    H. K. Heidemann, “Lidar base specification (ver. 1.3, february 2018),” U.S. Geological Survey, Techniques and Methods 11-B4, 2018, 3DEP quality levels; QL2 vertical accuracy≤10 cm RMSEz in non-vegetated terrain

  8. [8]

    Unmanned ground vehicle navigation using aerial ladar data,

    N. Vandapel, R. Donamukkala, and M. Hebert, “Unmanned ground vehicle navigation using aerial ladar data,”The International Journal of Robotics Research, vol. 25, no. 1, pp. 31–51, 2006

  9. [9]

    Temporal range registration for un- manned ground and aerial vehicles,

    R. Madhavan, T. Hong, and E. Messina, “Temporal range registration for un- manned ground and aerial vehicles,”Journal of Intelligent and Robotic Systems, vol. 44, no. 1, pp. 47–69, 2005

  10. [10]

    Surface-based registration of airborne and terrestrial mobile LiDAR point clouds,

    T.-A. Teo and S.-H. Huang, “Surface-based registration of airborne and terrestrial mobile LiDAR point clouds,”Remote Sensing, vol. 6, no. 12, pp. 12 686–12 707, 2014

  11. [11]

    An automated method to register airborne and terrestrial laser scanning point clouds,

    B. Yang, Y . Zang, Z. Dong, and R. Huang, “An automated method to register airborne and terrestrial laser scanning point clouds,”ISPRS Journal of Pho- togrammetry and Remote Sensing, vol. 109, pp. 62–76, 2015

  12. [12]

    The trimmed iterative closest point algorithm,

    D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, “The trimmed iterative closest point algorithm,” inProceedings of the 16th International Conference on Pattern Recognition (ICPR), 2002, pp. 545–548

  13. [13]

    Geometric transformer for fast and robust point cloud registration,

    Z. Qin, H. Yu, C. Wang, Y . Guo, Y . Peng, and K. Xu, “Geometric transformer for fast and robust point cloud registration,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 11 143–11 152

  14. [14]

    BUFFER-X: Towards zero- shot point cloud registration in diverse scenes,

    M. Seo, H. Lim, K. Lee, L. Carlone, and J. Park, “BUFFER-X: Towards zero- shot point cloud registration in diverse scenes,” inProceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2025, pp. 3851–3862

  15. [15]

    CrossLoc3D: Aerial-ground cross-source 3D place recognition,

    T. Guan, A. Muthuselvam, M. Hoover, X. Wang, J. Liang, A. J. Sathyamoorthy, D. Conover, and D. Manocha, “CrossLoc3D: Aerial-ground cross-source 3D place recognition,” inProceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2023, pp. 11 335–11 344

  16. [16]

    HOT- FormerLoc: Hierarchical octree transformer for versatile LiDAR place recogni- tion across ground and aerial views,

    E. Griffiths, M. Haghighat, S. Denman, C. Fookes, and M. Ramezani, “HOT- FormerLoc: Hierarchical octree transformer for versatile LiDAR place recogni- tion across ground and aerial views,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2025, pp. 6648–6658

  17. [17]

    Fast point feature histograms (FPFH) for 3D registration,

    R. B. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (FPFH) for 3D registration,” inProceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009, pp. 3212–3217

  18. [18]

    Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,

    M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981

  19. [19]

    Generalized-ICP,

    A. V . Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” inRobotics: Science and Systems (RSS), 2009

  20. [20]

    EOE: Expected overlap estimation over unstructured point cloud data,

    B. Eckart, K. Kim, and J. Kautz, “EOE: Expected overlap estimation over unstructured point cloud data,” inProceedings of the International Conference on 3D Vision (3DV), 2018, pp. 747–755

  21. [21]

    Robust low-overlap 3-D point cloud registration for outlier rejection,

    J. Stechschulte, N. R. Ahmed, and C. Heckman, “Robust low-overlap 3-D point cloud registration for outlier rejection,” inProceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2019, pp. 7143–7149

  22. [22]

    A symmetric objective function for ICP,

    S. Rusinkiewicz, “A symmetric objective function for ICP,”ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 38, no. 4, pp. 85:1–85:7, 2019

  23. [23]

    Super 4PCS: Fast global pointcloud registration via smart indexing,

    N. Mellado, D. Aiger, and N. J. Mitra, “Super 4PCS: Fast global pointcloud registration via smart indexing,”Computer Graphics Forum, vol. 33, no. 5, pp. 205–215, 2014

  24. [24]

    Fast global registration,

    Q.-Y . Zhou, J. Park, and V . Koltun, “Fast global registration,” inProceedings of the European Conference on Computer Vision (ECCV), 2016, pp. 766–782

  25. [25]

    TEASER: Fast and certifiable point cloud registration,

    H. Yang, J. Shi, and L. Carlone, “TEASER: Fast and certifiable point cloud registration,”IEEE Transactions on Robotics, vol. 37, no. 2, pp. 314–333, 2021, open-source implementation commonly referred to as TEASER++

  26. [26]

    Real-time correlative scan matching,

    E. B. Olson, “Real-time correlative scan matching,” inProceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009, pp. 4387– 4393

  27. [27]

    An FFT-based technique for translation, rotation, and scale-invariant image registration,

    B. S. Reddy and B. N. Chatterji, “An FFT-based technique for translation, rotation, and scale-invariant image registration,”IEEE Transactions on Image Processing, vol. 5, no. 8, pp. 1266–1271, 1996

  28. [28]

    3DMatch: Learning local geometric descriptors from RGB-D reconstructions,

    A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser, “3DMatch: Learning local geometric descriptors from RGB-D reconstructions,” inPro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1802–1811

  29. [29]

    Cross-PCR: A robust cross-source point cloud registration framework,

    G. Zhao, Z. Guo, Z. Du, and H. Ma, “Cross-PCR: A robust cross-source point cloud registration framework,”Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 10, pp. 10 403–10 411, 2025

  30. [30]

    SPEAL: Skeletal prior embedded attention learning for cross-source point cloud registration,

    K. Xiong, M. Zheng, Q. Xu, C. Wen, S. Shen, and C. Wang, “SPEAL: Skeletal prior embedded attention learning for cross-source point cloud registration,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 6, pp. 6279–6287, 2024

  31. [31]

    HOTFLoc++: End-to-End Hierarchical LiDAR Place Recognition, Re-Ranking, and 6-DoF Metric Localisation in Forests

    E. Griffiths, M. Haghighat, S. Denman, C. Fookes, and M. Ramezani, “HOT- FLoc++: End-to-end hierarchical LiDAR place recognition, re-ranking, and 6- DoF metric localisation in forests,” 2025, arXiv:2511.09170 [cs.CV]

  32. [32]

    GND: Global navigation dataset with multi-modal perception and multi-category traversability in outdoor campus environments,

    J. Liang, D. Das, D. Song, M. N. H. Shuvo, M. Durrani, K. Taranath, I. Penskiy, D. Manocha, and X. Xiao, “GND: Global navigation dataset with multi-modal perception and multi-category traversability in outdoor campus environments,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2025, pp. 2383–2390

  33. [33]

    Are we ready for autonomous driving? The KITTI Vision Benchmark Suite,

    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI Vision Benchmark Suite,” inProceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3354–3361

  34. [34]

    University of Michigan North Campus long-term vision and LiDAR dataset,

    N. Carlevaris-Bianco, A. K. Ushani, and R. M. Eustice, “University of Michigan North Campus long-term vision and LiDAR dataset,”International Journal of Robotics Research, vol. 35, no. 9, pp. 1023–1035, 2016

  35. [35]

    MulRan: Multimodal range dataset for urban place recognition,

    G. Kim, Y . S. Park, Y . Cho, J. Jeong, and A. Kim, “MulRan: Multimodal range dataset for urban place recognition,” inProceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 6246–6253

  36. [36]

    AGC-Drive: A large-scale dataset for real-world aerial- ground collaboration in driving scenarios,

    Y . Hou, B. Zou, M. Zhang, R. Chen, S. Yang, Y . Zhang, J. Zhuo, S. Chen, J. Chen, and H. Ma, “AGC-Drive: A large-scale dataset for real-world aerial- ground collaboration in driving scenarios,” inAdvances in Neural Information Processing Systems (NeurIPS), Dec. 2025, Datasets and Benchmarks Track. [Online]. Available: https://openreview.net/forum?id=N07WGSPh9l

  37. [37]

    GRACO: A multimodal dataset for ground and aerial cooperative localization and mapping,

    Y . Zhu, Y . Kong, Y . Jie, S. Xu, and H. Cheng, “GRACO: A multimodal dataset for ground and aerial cooperative localization and mapping,”IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 966–973, 2023

  38. [38]

    SC-LIO-SAM: LiDAR odometry using scan context loop closure,

    G. Kim, “SC-LIO-SAM: LiDAR odometry using scan context loop closure,” GitHub repository, 2021, accessed: 2024-12-01. [Online]. Available: https: //github.com/gisbi-kim/SC-LIO-SAM

  39. [39]

    Maryland iMAP LiDAR overview,

    Maryland iMAP, “Maryland iMAP LiDAR overview,” https://imap.maryland. gov/pages/lidar-overview, 2024, accessed 2024

  40. [40]

    Virginia LiDAR download application,

    Virginia Geographic Information Network (VGIN), “Virginia LiDAR download application,” https://vgin.vdem.virginia.gov/datasets/ virginia-lidar-download-application, 2024, accessed 2024

  41. [41]

    Open3D: A Modern Library for 3D Data Processing

    Q.-Y . Zhou, J. Park, and V . Koltun, “Open3D: A modern library for 3D data processing,” 2018, arXiv:1801.09847 [cs.CV]