TinyDEVO: Deep Event-based Visual Odometry on Ultra-low-power Multi-core Microcontrollers
Pith reviewed 2026-05-10 18:02 UTC · model grok-4.3
The pith
A compact deep model enables event-based visual odometry to run on ultra-low-power microcontrollers at 1.2 frames per second.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We present TinyDEVO, an event-based VO deep learning model designed for resource-constrained microcontroller units (MCUs). We deploy TinyDEVO on an ultra-low-power (ULP) 9-core RISC-V-based MCU, achieving a throughput of approximately 1.2 frames per second with an average power consumption of only 86 mW. Thanks to our neural network architectural optimizations and hyperparameter tuning, TinyDEVO reduces the memory footprint by 11.5x (to 63.8 MB) and the number of operations per frame by 29.7x (to 5.2 billion MACs per frame) compared to DEVO, while maintaining an average trajectory error of 27 cm, i.e., only 19 cm higher than DEVO, on three state-of-the-art datasets. Our work demonstrates, for the first time, the feasibility of an event-based VO pipeline on ultra-low-power devices.
What carries the argument
TinyDEVO, the deep neural network for event-based visual odometry whose architecture and hyperparameters have been tuned to reduce memory and computation while preserving usable accuracy on microcontrollers.
If this is right
- Event-based visual odometry becomes practical for battery-powered robots and drones that cannot carry large processors.
- Wearable augmented-reality devices can add low-power motion tracking without draining batteries quickly.
- Embedded systems can now combine event cameras with other sensors for navigation at total power budgets under 100 mW.
- The same shrinking technique could be applied to other vision models to move them from servers to edge hardware.
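The headline reduction factors follow directly from the raw numbers in the abstract; a quick arithmetic check (all figures taken from the paper):

```python
# Figures quoted in the abstract (DEVO baseline vs. TinyDEVO).
devo_mem_mb, tiny_mem_mb = 733.0, 63.8   # memory footprint, MB
devo_macs_b, tiny_macs_b = 155.0, 5.2    # billions of MACs per frame

mem_reduction = devo_mem_mb / tiny_mem_mb  # ~11.5x, as reported
mac_reduction = devo_macs_b / tiny_macs_b  # ~29.8x; abstract rounds to 29.7x

print(f"memory: {mem_reduction:.1f}x, compute: {mac_reduction:.1f}x")
```

The small mismatch on the compute ratio (29.8 vs. the reported 29.7) is consistent with the 5.2 GMAC figure itself being rounded.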
Where Pith is reading between the lines
- Designers of future microcontrollers could add dedicated event-processing accelerators to raise frame rate without increasing power.
- The 27 cm error level may still support coarse navigation tasks such as obstacle avoidance even if it is insufficient for precise mapping.
- Long-term deployments on solar-powered sensors become feasible if the 86 mW average draw matches available energy harvesting.
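The harvesting scenario above reduces to simple energy accounting. In the sketch below, only the 86 mW average power, 1.2 FPS, and 5.2 billion MACs per frame come from the paper; the 100 mW harvesting figure is an illustrative assumption, not a measured value:

```python
power_w = 0.086  # average power draw reported in the paper
fps = 1.2        # reported throughput

# Energy spent per processed frame (~71.7 mJ).
energy_per_frame_j = power_w / fps

# Implied compute efficiency (~72.5 GMAC per joule).
macs_per_frame = 5.2e9
efficiency_gmac_per_j = macs_per_frame / energy_per_frame_j / 1e9

# Illustrative check: an energy harvester delivering ~100 mW average
# (assumed, not from the paper) would cover the reported draw.
harvest_w = 0.100
sustainable = harvest_w >= power_w

print(f"{energy_per_frame_j * 1e3:.1f} mJ/frame, "
      f"{efficiency_gmac_per_j:.1f} GMAC/J, sustainable={sustainable}")
```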
Load-bearing premise
The assumption that a 19 cm increase in average trajectory error remains acceptable for the intended applications and that the three evaluated datasets adequately represent real-world conditions.
What would settle it
Measuring trajectory error on a fourth independent dataset with different motion speeds, lighting, or scene types and finding it exceeds 27 cm on average.
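The "average trajectory error" at stake is conventionally reported as absolute trajectory error (ATE): the RMSE of positional error between estimated and ground-truth trajectories. A minimal sketch of how such a test on a fourth dataset would be scored, assuming the trajectories are already time-matched and aligned (real evaluations additionally apply an SE(3) or Sim(3) alignment step):

```python
import math

def ate_rmse(est, gt):
    """RMSE of positional error between two aligned trajectories.

    est, gt: equal-length lists of (x, y, z) positions at matched timestamps.
    """
    assert len(est) == len(gt)
    sq_errs = [sum((e - g) ** 2 for e, g in zip(p, q))
               for p, q in zip(est, gt)]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Toy check: a constant 0.27 m lateral offset yields a 0.27 m ATE,
# the error level reported for TinyDEVO.
gt = [(float(i), 0.0, 0.0) for i in range(10)]
est = [(x + 0.27, y, z) for (x, y, z) in gt]
print(ate_rmse(est, gt))  # ~0.27, up to floating-point rounding
```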
Original abstract
A key task in embedded vision is visual odometry (VO), which estimates camera motion from visual sensors, and it is a core component in many embedded power-constrained systems, from autonomous robots to augmented and virtual reality wearable devices. The newest class of VO systems combines deep learning models with bio-inspired event-based cameras, which are robust to motion blur and lighting conditions. However, state-of-the-art (SoA) event-based VO algorithms require significant memory and computation. For example, the leading approach DEVO requires 733 MB of memory and 155 billion multiply-accumulate (MAC) operations per frame. We present TinyDEVO, an event-based VO deep learning model designed for resource-constrained microcontroller units (MCUs). We deploy TinyDEVO on an ultra-low-power (ULP) 9-core RISC-V-based MCU, achieving a throughput of approximately 1.2 frames per second with an average power consumption of only 86 mW. Thanks to our neural network architectural optimizations and hyperparameter tuning, TinyDEVO reduces the memory footprint by 11.5x (to 63.8 MB) and the number of operations per frame by 29.7x (to 5.2 billion MACs per frame) compared to DEVO, while maintaining an average trajectory error of 27 cm, i.e., only 19 cm higher than DEVO, on three state-of-the-art datasets. Our work demonstrates, for the first time, the feasibility of an event-based VO pipeline on ultra-low-power devices.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes TinyDEVO, a compressed deep learning architecture for event-based visual odometry (VO) optimized for ultra-low-power multi-core microcontrollers. It achieves an 11.5× reduction in memory (to 63.8 MB) and 29.7× reduction in operations (to 5.2B MACs per frame) compared to DEVO, enabling real-time inference at ~1.2 FPS with 86 mW power on a 9-core RISC-V MCU. The model maintains an average trajectory error of 27 cm (19 cm higher than DEVO) on three datasets, claiming the first demonstration of event-based VO feasibility on such devices.
Significance. If the reported metrics are reproducible, this represents a meaningful step toward practical event-based VO on ultra-low-power hardware. The explicit hardware deployment results (power, throughput, memory footprint) and the scale of the compression relative to DEVO constitute concrete evidence of technical feasibility. Such work could enable new classes of always-on vision systems in battery-limited robots and wearables, where prior event-based deep VO pipelines were infeasible due to resource demands.
major comments (1)
- [Abstract] The feasibility claim is predicated on the 27 cm average trajectory error (only 19 cm above DEVO) remaining usable for the stated target applications. However, the manuscript supplies no trajectory lengths, no per-dataset error statistics, no analysis of error accumulation over extended sequences, and no closed-loop control results. Without these, it is not possible to determine whether the observed error supports functional operation in robotics or AR/VR scenarios where drift rapidly becomes prohibitive.
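The objection can be made concrete: the same 27 cm absolute error corresponds to very different drift rates depending on trajectory length, which the abstract does not report. The sequence lengths below are hypothetical, chosen only to illustrate the spread:

```python
ate_m = 0.27  # average trajectory error reported in the abstract

# Hypothetical sequence lengths (not stated in the manuscript).
for length_m in (5.0, 50.0, 500.0):
    drift_pct = 100.0 * ate_m / length_m
    print(f"{length_m:6.0f} m trajectory -> {drift_pct:.2f}% drift")
```

On a 5 m sequence, 27 cm is 5.4% drift (likely prohibitive for AR/VR); on a 500 m sequence it is 0.05% (competitive); without the lengths, the number is uninterpretable.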
minor comments (2)
- [Abstract] The abstract refers to 'three state-of-the-art datasets' without naming them; explicit dataset identifiers and references would aid reproducibility and allow readers to assess domain coverage.
- [Abstract] The power and throughput figures (86 mW, 1.2 FPS) are presented without stating the exact MCU part number, operating frequency, or measurement methodology; adding these details would strengthen the deployment claims.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comment raises a valid point about providing more context for the trajectory error metrics to better substantiate the feasibility claims for target applications. We address this below and indicate where revisions will be made.
Point-by-point responses
Referee: [Abstract] The feasibility claim is predicated on the 27 cm average trajectory error (only 19 cm above DEVO) remaining usable for the stated target applications. However, the manuscript supplies no trajectory lengths, no per-dataset error statistics, no analysis of error accumulation over extended sequences, and no closed-loop control results. Without these, it is not possible to determine whether the observed error supports functional operation in robotics or AR/VR scenarios where drift rapidly becomes prohibitive.
Authors: We agree that additional details on the error evaluation would strengthen the presentation. The reported 27 cm average is computed across all sequences in the three datasets (with DEVO evaluated identically for direct comparison). In the revised manuscript, we will add the trajectory lengths for each sequence and per-dataset error breakdowns to the evaluation section. The average already incorporates performance over sequences of varying durations, providing an implicit view of accumulation; we will explicitly note this and discuss drift implications relative to DEVO. However, the work does not include closed-loop control experiments, as the focus is on open-loop odometry accuracy and MCU deployment feasibility.
Revision: partial
- Not addressed: closed-loop control results, which lie outside the scope of the current open-loop visual odometry and hardware deployment study.
Circularity Check
No circularity: empirical measurements of optimized model on hardware
full rationale
The paper's central claim is the feasibility of an event-based VO pipeline on ULP MCUs, supported by direct empirical results: architectural optimizations and hyperparameter tuning applied to a deep learning model, followed by deployment measurements (1.2 fps, 86 mW, 63.8 MB memory, 5.2B MACs/frame) and trajectory error (27 cm average) on three datasets. These are compared against the prior DEVO baseline without any self-referential derivation, fitted-parameter renaming, or load-bearing self-citation chain. No equations or first-principles steps are presented that reduce to their own inputs by construction; all reported quantities are independent hardware and accuracy benchmarks.