pith. machine review for the scientific record.

arxiv: 2605.03678 · v1 · submitted 2026-05-05 · 💻 cs.RO

Recognition: unknown

Robust Visual SLAM for UAV Navigation in GPS-Denied and Degraded Environments: A Multi-Paradigm Evaluation and Deployment Study

Akshay Deepak, Prasoon Kumar, Sandeep Kumar

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 15:38 UTC · model grok-4.3

classification 💻 cs.RO
keywords visual SLAM · UAV navigation · GPS-denied · degraded environments · learning-based methods · tracking success · embedded deployment

The pith

Learning-based visual SLAM outperforms classical methods in degraded UAV environments

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper evaluates five visual SLAM systems spanning classical, deep learning, recurrent, and vision transformer paradigms to assess their performance for UAV localization in GPS-denied settings with visual degradation. It applies controlled conditions of low light, dust haze, motion blur, and their combination to sequences from public benchmarks and a custom dataset, using precise Vicon ground truth. Results indicate that classical ORB-SLAM3 experiences critical failures under degradation while learning-based methods like MASt3R and DUSt3R sustain higher accuracy and tracking success. This evaluation provides practical insights for selecting SLAM systems that ensure reliable autonomous operations in challenging real-world conditions.

Core claim

The central claim is that learning-based V-SLAM systems exhibit greater robustness to visual degradations than classical methods, evidenced by MASt3R achieving the lowest degraded absolute trajectory error of 0.027 m and DUSt3R the highest tracking success rate of 96.5%, with DPVO offering the best efficiency-robustness trade-off at 18.6 FPS, 3.1 GB GPU memory, and 86.1% tracking success rate, supported by embedded deployment analysis on NVIDIA Jetson platforms.
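
A rough sense of how the efficiency side of that trade-off (FPS and peak GPU memory) is typically measured helps in reading those numbers. The sketch below is a generic PyTorch profiling loop, not the paper's actual harness; `model` and `frames` are hypothetical placeholders for whichever V-SLAM front end is being timed.

```python
import time
import torch

def profile_fps_and_memory(model, frames, device="cuda"):
    """Rough per-frame throughput (FPS) and peak GPU memory (GB) for an
    inference loop; `model` and `frames` are hypothetical placeholders."""
    model = model.to(device).eval()
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    with torch.no_grad():
        for frame in frames:
            model(frame.to(device))          # one pose update per frame
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start
    fps = len(frames) / elapsed
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    return fps, peak_gb
```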

What carries the argument

Comparative evaluation of five V-SLAM systems under five controlled conditions (normal, low light, dust haze, motion blur, combined) on benchmark and custom datasets with sub-millimeter Vicon ground truth
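
The two headline metrics are absolute trajectory error (ATE) and tracking success rate (TSR). Below is a minimal sketch of both, assuming Nx3 position arrays, a rigid (Umeyama-style) alignment before computing ATE, and a per-frame validity flag for TSR; the paper likely evaluates with the evo toolkit it cites, whose exact protocol may differ.

```python
import numpy as np

def rigid_align(est, gt):
    """Least-squares rigid (rotation + translation) alignment of estimated
    positions onto ground truth, Umeyama/Horn-style, without scale."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return est @ R.T + t

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error: RMSE of translational residuals after alignment."""
    aligned = rigid_align(est_xyz, gt_xyz)
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))

def tracking_success_rate(valid_pose_flags):
    """TSR: fraction of frames for which the system reported a valid pose."""
    return float(np.mean(np.asarray(valid_pose_flags, dtype=bool)))

# hypothetical 500-frame sequence: noisy estimate of a random-walk ground truth
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(0.0, 0.01, size=(500, 3)), axis=0)
est = gt + rng.normal(0.0, 0.005, size=gt.shape)
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
print(f"TSR:      {tracking_success_rate([True] * 470 + [False] * 30):.1%}")
```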

Load-bearing premise

The controlled degradation conditions of low light, dust haze, motion blur, and their combination sufficiently represent real-world visual challenges for UAVs in GPS-denied environments.
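
This summary does not spell out how the degradations are synthesized. The sketch below uses the standard image-formation models such conditions are usually built from (gain-plus-gamma darkening for low light, an atmospheric-scattering model for haze, a linear kernel for motion blur); parameters are illustrative only and not taken from the paper.

```python
import cv2
import numpy as np

def low_light(img, gain=0.4, gamma=2.5):
    """Darken via gain + gamma curve (a common low-light approximation)."""
    x = (img.astype(np.float32) / 255.0) * gain
    return np.clip(255.0 * np.power(x, gamma), 0, 255).astype(np.uint8)

def dust_haze(img, beta=1.2, airlight=210.0):
    """Atmospheric scattering model I = J*t + A*(1 - t) with a spatially
    constant transmission t = exp(-beta), i.e. uniform haze/dust."""
    t = np.exp(-beta)
    hazed = img.astype(np.float32) * t + airlight * (1.0 - t)
    return np.clip(hazed, 0, 255).astype(np.uint8)

def motion_blur(img, ksize=15):
    """Horizontal linear motion blur via a normalized line kernel."""
    kernel = np.zeros((ksize, ksize), np.float32)
    kernel[ksize // 2, :] = 1.0 / ksize
    return cv2.filter2D(img, -1, kernel)

def combined(img):
    """Stacked degradation: low light, then haze, then blur."""
    return motion_blur(dust_haze(low_light(img)))
```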

What would settle it

Demonstrating that MASt3R or DUSt3R experiences tracking failure rates similar to ORB-SLAM3 in a real UAV flight under dense haze and motion blur would falsify the robustness superiority of learning-based methods.

Figures

Figures reproduced from arXiv: 2605.03678 by Akshay Deepak, Prasoon Kumar, Sandeep Kumar.

Figure 1: Visual SLAM Evaluation for UAV Navigation in Degraded Environments.
Figure 2: Unified V-SLAM benchmark pipeline. Multi-modal sensor inputs (monocular camera, optional depth, IMU) feed a front-end …
Figure 3: Overview of the experimental system architecture. The setup includes cloud-based GPU servers (RunPod with RTX 3090 GPUs) …
Figure 4: Tracking success rate (TSR) over time under different degradation conditions: (a) Normal, (b) Low light, (c) Dust haze, (d) Motion blur …
Figure 5: Comparative trajectory overlay for all five systems on a custom combined degradation sequence. Ground truth (black, thick), ORB-SLAM3 …
Figure 6: Latency vs. power trade-off across platforms. Lower is better.
read the original abstract

Reliable localization in GPS-denied, visually degraded environments is critical for autonomous UAV operations. This paper presents a systematic comparative evaluation of five V-SLAM systems (ORB-SLAM3, DPVO, DROID-SLAM, DUSt3R, and MASt3R) spanning classical, deep learning, recurrent, and Vision Transformer (ViT) paradigms. Experiments are conducted on curated sequences from four public benchmarks (TUM RGB-D, EuRoC MAV, UMA-VI, SubT-MRS) and a custom monocular indoor dataset under five controlled degradation conditions (normal, low light, dust haze, motion blur, and combined), with sub-millimeter Vicon ground truth. Results show that ORB-SLAM3 fails critically under severe degradation (62.4% overall TSR; 0% under dense haze), while learning-based methods remain robust: MASt3R achieves the lowest degraded ATE (0.027 m) and DUSt3R the highest tracking success (96.5%). DPVO offers the best efficiency-robustness trade-off (18.6 FPS, 3.1 GB GPU memory, 86.1% TSR), making it the preferred choice for memory-constrained embedded platforms. Embedded deployment analysis across NVIDIA Jetson platforms provides actionable guidelines for SLAM selection under SWaP-constrained UAV scenarios.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that learning-based V-SLAM methods outperform classical ones like ORB-SLAM3 (which shows 62.4% overall TSR and 0% under dense haze) in GPS-denied, visually degraded UAV environments. It reports MASt3R with the lowest degraded ATE (0.027 m), DUSt3R with the highest TSR (96.5%), and DPVO with the best efficiency-robustness trade-off (18.6 FPS, 3.1 GB GPU memory, 86.1% TSR) based on evaluations across ORB-SLAM3, DPVO, DROID-SLAM, DUSt3R, and MASt3R on curated sequences from TUM RGB-D, EuRoC MAV, UMA-VI, SubT-MRS, and a custom monocular indoor dataset with Vicon ground truth under five controlled degradation conditions (normal, low light, dust haze, motion blur, combined). The work also provides Jetson platform deployment analysis for SWaP-constrained scenarios.

Significance. If the results hold, the study supplies concrete empirical guidance for V-SLAM selection in GPS-denied UAV navigation, its strengths being the use of public benchmarks plus custom data with sub-millimeter Vicon ground truth, reported ATE/TSR/FPS/memory metrics, and embedded deployment analysis. This could inform practical algorithm choices under visual degradation, though significance is tempered by questions of how well the controlled conditions generalize.

major comments (2)
  1. [Degradation protocols section] The controlled degradation conditions (low light, dust haze, motion blur, combined) are applied to benchmark sequences, but the manuscript provides no evidence or analysis showing these adequately model real UAV-specific factors such as vibration-induced rolling shutter, dynamic scene elements, variable wind-driven motion, or compound degradations (e.g., haze + low light + textureless surfaces). This is load-bearing for the central robustness claims and Jetson deployment guidelines, as the reported ATE/TSR gaps may not persist under unmodeled conditions.
  2. [Results and evaluation section] The superiority claims (e.g., MASt3R lowest degraded ATE of 0.027 m, DUSt3R 96.5% TSR, DPVO 86.1% TSR) and efficiency trade-offs are presented as averages without reported variance, statistical significance testing, or explicit details on sequence selection and data exclusion rules across the public benchmarks and custom dataset. This undermines confidence in the cross-method comparisons and the recommendation of DPVO for embedded platforms.
minor comments (1)
  1. [Abstract] The abstract lists specific numerical results but omits the total number of sequences or trials per condition, which would aid in assessing the scale and reliability of the metrics.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. These points highlight important aspects of experimental design and statistical presentation that we will address to strengthen the work. We respond to each major comment below.

read point-by-point responses
  1. Referee: [Degradation protocols section] The controlled degradation conditions (low light, dust haze, motion blur, combined) are applied to benchmark sequences, but the manuscript provides no evidence or analysis showing these adequately model real UAV-specific factors such as vibration-induced rolling shutter, dynamic scene elements, variable wind-driven motion, or compound degradations (e.g., haze + low light + textureless surfaces). This is load-bearing for the central robustness claims and Jetson deployment guidelines, as the reported ATE/TSR gaps may not persist under unmodeled conditions.

    Authors: We acknowledge that our synthetically applied degradations on benchmark sequences do not fully replicate all real UAV operational factors, including vibration-induced rolling shutter, wind-driven motion variability, or certain compound degradations. The selected benchmarks (EuRoC MAV, SubT-MRS) incorporate UAV-relevant dynamics and environments, and the controlled conditions enable reproducible isolation of visual effects across methods. We will add a limitations subsection in the revised manuscript that explicitly discusses these gaps, their potential impact on generalizability, and directions for future real-flight validation. This contextualizes the robustness claims without altering the reported comparative results. revision: partial

  2. Referee: [Results and evaluation section] The superiority claims (e.g., MASt3R lowest degraded ATE of 0.027 m, DUSt3R 96.5% TSR, DPVO 86.1% TSR) and efficiency trade-offs are presented as averages without reported variance, statistical significance testing, or explicit details on sequence selection and data exclusion rules across the public benchmarks and custom dataset. This undermines confidence in the cross-method comparisons and the recommendation of DPVO for embedded platforms.

    Authors: We agree that reporting only averages limits interpretability. In the revision we will add standard deviations for all ATE and TSR metrics across sequences, provide a clear description of sequence selection criteria and any exclusion rules (e.g., minimum track length or failure thresholds), and include basic statistical significance tests (paired t-tests on per-sequence metrics) to support the observed differences. These changes will be incorporated directly into the results and evaluation sections. revision: yes
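
A minimal sketch of the per-sequence paired test the authors propose, using hypothetical ATE values (not the paper's data) and SciPy's paired t-test, with a Wilcoxon signed-rank test as a non-parametric cross-check:

```python
import numpy as np
from scipy import stats

# hypothetical per-sequence ATE (m) for two methods on the same six sequences
ate_orbslam3 = np.array([0.041, 0.110, 0.087, 0.152, 0.066, 0.095])
ate_mast3r   = np.array([0.025, 0.031, 0.029, 0.040, 0.022, 0.034])

t_stat, p_t = stats.ttest_rel(ate_orbslam3, ate_mast3r)   # paired t-test
w_stat, p_w = stats.wilcoxon(ate_orbslam3, ate_mast3r)    # non-parametric check

print(f"paired t-test: t={t_stat:.2f}, p={p_t:.4f}")
print(f"wilcoxon:      W={w_stat:.2f}, p={p_w:.4f}")
print(f"ORB-SLAM3 mean ± sd: {ate_orbslam3.mean():.3f} ± {ate_orbslam3.std(ddof=1):.3f} m")
print(f"MASt3R    mean ± sd: {ate_mast3r.mean():.3f} ± {ate_mast3r.std(ddof=1):.3f} m")
```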

Circularity Check

0 steps flagged

Purely empirical evaluation with no derivations or self-referential predictions

full rationale

The manuscript is a comparative benchmark study of five V-SLAM algorithms across public datasets and controlled synthetic degradations, reporting measured ATE, TSR, FPS, and memory metrics against external Vicon ground truth. No equations, parameter fitting, uniqueness theorems, or ansatzes are invoked; all performance claims are direct observations from experiments. The central robustness conclusions therefore rest on external data rather than any internal reduction or self-citation chain, satisfying the self-contained criterion.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Empirical benchmarking study with no mathematical derivations, free parameters, or new theoretical constructs.

pith-pipeline@v0.9.0 · 5560 in / 1044 out tokens · 39332 ms · 2026-05-07T15:38:16.678073+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

42 extracted references · 1 canonical work page · 1 internal anchor

  1. [1]

    Autonomous navigation in GPS-denied environments: Technology gaps and research priorities,

    NATO Science and Technology Organization, “Autonomous navigation in GPS-denied environments: Technology gaps and research priorities,” NATO STO, Brussels, Belgium, Tech. Rep. TR-IST-180, 2023

  2. [2]

    ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM,

    C. Campos, R. Elvira, J. J. Gómez Rodríguez, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM,” IEEE Trans. Robot., vol. 37, no. 6, pp. 1874–1890, Dec. 2021

  3. [3]

    Dpvo: Deep patch visual odometry,

    Z. Teed and J. Deng, “DPVO: Deep patch visual odometry,” in Advances in Neural Information Processing Systems (NeurIPS), 2023

  4. [4]

    DROID-SLAM: Deep visual SLAM for monocular, stereo, and RGB-D cameras,

    Z. Teed and J. Deng, “DROID-SLAM: Deep visual SLAM for monocular, stereo, and RGB-D cameras,” in Adv. Neural Inf. Process. Syst. (NeurIPS), vol. 34, 2021, pp. 16558–16569

  5. [5]

    DUSt3R: Geometric 3d vision made easy,

    S. Wang, V. Leroy, Y. Cabon, B. Chidlovskii, and J. Revaud, “DUSt3R: Geometric 3d vision made easy,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2024, pp. 20697–20709

  6. [6]

    Grounding image matching in 3d with MASt3R,

    V. Leroy, Y. Cabon, and J. Revaud, “Grounding image matching in 3d with MASt3R,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Milan, Italy, Sep.–Oct. 2024

  7. [7]

    A benchmark for the evaluation of RGB-D SLAM systems,

    J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, “A benchmark for the evaluation of RGB-D SLAM systems,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Vilamoura, Portugal, Oct. 2012, pp. 573–580

  8. [8]

    The EuRoC micro aerial vehicle datasets,

    M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, “The EuRoC micro aerial vehicle datasets,” Int. J. Robot. Res., vol. 35, no. 10, pp. 1157–1163, Sep. 2016

  9. [9]

    Are we ready for autonomous driving? the KITTI vision benchmark suite,

    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Providence, RI, USA, Jun. 2012, pp. 3354–3361

  10. [10]

    Netvlad: Cnn architecture for weakly supervised place recognition,

    R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “NetVLAD: CNN architecture for weakly supervised place recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 5297–5307

  11. [11]

    Fine-tuning cnn image retrieval with no human annotation,

    F. Radenović, G. Tolias, and O. Chum, “Fine-tuning CNN image retrieval with no human annotation,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), 2018

  12. [12]

    DINOv2: Learning Robust Visual Features without Supervision

    M. Oquab, T. Darcet, T. Moutakanni, H. V. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al., “DINOv2: Learning robust visual features without supervision,” arXiv preprint arXiv:2304.07193, 2023

  13. [13]

    Bags of binary words for fast place recognition in image sequences,

    D. Gálvez-López and J. D. Tardós, “Bags of binary words for fast place recognition in image sequences,” IEEE Trans. Robot., vol. 28, no. 5, pp. 1188–1197, Oct. 2012

  14. [14]

    RAFT: Recurrent all-pairs field transforms for optical flow,

    Z. Teed and J. Deng, “RAFT: Recurrent all-pairs field transforms for optical flow,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Glasgow, UK (Virtual), Aug. 2020, pp. 402–419

  15. [15]

    MonoSLAM: Real-time single camera SLAM,

    A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 1052–1067, Jun. 2007

  16. [16]

    Parallel tracking and mapping for small AR workspaces,

    G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in Proc. IEEE/ACM Int. Symp. Mixed Augmented Real. (ISMAR), Nara, Japan, Nov. 2007, pp. 225–234

  17. [17]

    ORB-SLAM: A versatile and accurate monocular SLAM system,

    R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, Oct. 2015

  18. [18]

    ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras,

    R. Mur-Artal and J. D. Tardós, “ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras,” IEEE Trans. Robot., vol. 33, no. 5, pp. 1255–1262, Oct. 2017

  19. [19]

    ORB: An efficient alternative to SIFT or SURF,

    E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Barcelona, Spain, Nov. 2011, pp. 2564–2571

  20. [20]

    Robust visual SLAM with point and line features,

    X. Zuo, X. Xie, Y. Liu, and G. Huang, “Robust visual SLAM with point and line features,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Vancouver, BC, Canada, Sep. 2017, pp. 1775–1782

  21. [21]

    Direct sparse odometry,

    J. Engel, V. Koltun, and D. Cremers, “Direct sparse odometry,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 3, pp. 611–625, Mar. 2018

  22. [22]

    LSD-SLAM: Large-scale direct monocular SLAM,

    J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Zurich, Switzerland, Sep. 2014, pp. 834–849

  23. [23]

    Cs231n: Convolutional neural networks for visual recognition,

    A. Karpathy, “CS231n: Convolutional neural networks for visual recognition,” Stanford University course notes, 2016. [Online]. Available: http://cs231n.stanford.edu/

  24. [24]

    DVI-SLAM: A dual visual inertial SLAM network,

    X. Peng, Z. Liu, W. Li, P. Tan, S. Cho, and Q. Wang, “DVI-SLAM: A dual visual inertial SLAM network,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Yokohama, Japan, May 2024, pp. 12020–12026

  25. [25]

    ScanNet: Richly-annotated 3d reconstructions of indoor scenes,

    A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner, “ScanNet: Richly-annotated 3d reconstructions of indoor scenes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 5828–5839

  26. [26]

    Matterport3D: Learning from RGB-D data in indoor environments,

    A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Nießner, M. Savva, S. Song, A. Zeng, and Y. Zhang, “Matterport3D: Learning from RGB-D data in indoor environments,” in Proc. Int. Conf. 3D Vis. (3DV), Qingdao, China, Oct. 2017, pp. 667–676

  27. [27]

    Adaptive histogram equalization and its variations,

    S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. H. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Comput. Vis. Graph. Image Process., vol. 39, no. 3, pp. 355–368, Sep. 1987

  28. [28]

    The Retinex theory of color vision,

    E. H. Land, “The Retinex theory of color vision,” Sci. Am., vol. 237, no. 6, pp. 108–128, Dec. 1977

  29. [29]

    Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,

    K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Trans. Image Process., vol. 26, no. 7, pp. 3142–3155, Jul. 2017

  30. [30]

    Incremental visual-inertial 3d mesh generation with structural regularities,

    Y. He, B. Zhao, Y. Guo, and H. Zha, “Incremental visual-inertial 3d mesh generation with structural regularities,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Montreal, QC, Canada, May 2019, pp. 7323–7330

  31. [31]

    The multivehicle stereo event camera dataset: An event camera dataset for 3d perception,

    A. Z. Zhu, D. Thakur, T. Özaslan, B. Pfrommer, V. Kumar, and K. Daniilidis, “The multivehicle stereo event camera dataset: An event camera dataset for 3d perception,” IEEE Robot. Autom. Lett., vol. 3, no. 3, pp. 2032–2039, Jul. 2018

  32. [32]

    Event-based visual/inertial odometry for UAV indoor navigation,

    A. Elamin, A. El-Rabbany, and S. Jacob, “Event-based visual/inertial odometry for UAV indoor navigation,” Sensors, vol. 25, no. 1, p. 61, Jan. 2025

  33. [33]

    Complementary multi-modal sensor fusion for resilient robot pose estimation in subterranean environments,

    S. Khattak, H. Nguyen, F. Mascarich, T. Dang, and K. Alexis, “Complementary multi-modal sensor fusion for resilient robot pose estimation in subterranean environments,” in Proc. Int. Conf. Unmanned Aircr. Syst. (ICUAS), Athens, Greece (Virtual), Sep. 2020, pp. 1024–1031

  34. [34]

    g2o: A general framework for graph optimization,

    R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Shanghai, China, May 2011, pp. 3607–3613

  35. [35]

    The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments,

    D. Zuñiga-Noël, F. Moreno-Noguer, and J. González-Jiménez, “The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments,” Int. J. Robot. Res., vol. 39, no. 9, pp. 1047–1064, Aug. 2020

  36. [36]

    SubT-MRS dataset: Pushing SLAM towards all-weather environments,

    S. Zhao, W. Zhang, C. Fu, M. Li, C. Wang, S. Li, D. Zhu, H. Li, P. Xu, and C. Cao, “SubT-MRS dataset: Pushing SLAM towards all-weather environments,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2024, pp. 22647–22657

  37. [37]

    TartanAir: A dataset to push the limits of visual SLAM,

    W. Wang, D. Zhu, X. Wang, Y. Hu, Y. Qiu, C. Wang, Y. Hu, A. Kapoor, and S. Scherer, “TartanAir: A dataset to push the limits of visual SLAM,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Las Vegas, NV, USA (Virtual), Oct. 2020, pp. 4909–4916

  38. [38]

    evo: A Python package for the evaluation of odometry and SLAM,

    M. Grupp, “evo: A Python package for the evaluation of odometry and SLAM,” GitHub, 2017. [Online]. Available: https://github.com/MichaelGrupp/evo

  39. [39]

    Runpod documentation: Cloud gpu platform for ai/ml workloads,

    RunPod, Inc., “RunPod documentation: Cloud GPU platform for AI/ML workloads,” accessed 2025-04-26. [Online]. Available: https://www.runpod.io/

  40. [40]

    TensorRT developer guide,

    NVIDIA Corporation, “TensorRT developer guide,” NVIDIA Documentation, 2024. [Online]. Available: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/

  41. [41]

    ROVER: A multi-season dataset for visual SLAM,

    F. Schmidt, J. Daubermann, M. Mitschke, C. Blessing, S. Meyer, M. Enzweiler, and A. Valada, “ROVER: A multi-season dataset for visual SLAM,” IEEE Trans. Robot., vol. 41, pp. 4005–4022, 2025

  42. [42]

    The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM,

    E. Mueggler, H. Rebecq, G. Gallego, T. Delbrück, and D. Scaramuzza, “The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM,” Int. J. Robot. Res., vol. 36, no. 2, pp. 142–149, Feb. 2017