pith. machine review for the scientific record. sign in

arxiv: 2602.19577 · v2 · submitted 2026-02-23 · 💻 cs.RO

Recognition: 2 theorem links

· Lean Theorem

Chasing Ghosts: A Simulation-to-Real Olfactory Navigation Stack with Optional Vision Augmentation

Authors on Pith no claims yet

Pith reviewed 2026-05-15 20:56 UTC · model grok-4.3

classification 💻 cs.RO
keywords UAVodor source localizationolfactory navigationsimulation-to-realminimal sensor suitequadrotorlearning-based policy
0
0 comments X

The pith

A UAV locates odor sources by flying directly to them using a simulation-trained policy and only onboard sensors.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper presents a complete UAV system for online odor source localization that avoids building gas distribution maps and does not rely on external positioning infrastructure. A learning-based navigation policy is trained entirely in simulation using signals from a minimal olfaction sensor suite and then deployed on a physical quadrotor. Real-world indoor flight tests with an ethanol source demonstrate consistent source-finding behavior despite turbulent airflow and delayed sensory signals. Vision is incorporated as an optional modality to accelerate progress under favorable conditions. The authors release all hardware designs, firmware, simulation code, and datasets to enable community reproduction and extension.

Core claim

The UAV navigates directly toward an odor source using a learning-based navigation policy trained in simulation, without constructing an explicit gas distribution map or relying on external positioning systems, as validated through real-world flight experiments in a large indoor environment with an ethanol source under realistic airflow conditions.

What carries the argument

Simulation-to-real transfer of a learning-based navigation policy that converts sparse, delayed inputs from onboard olfaction hardware into direct flight commands toward the source.

If this is right

  • The UAV performs source localization without predefined coverage patterns or external infrastructure.
  • Optional vision integration can accelerate navigation when visual cues are available.
  • The approach handles realistic turbulent airflow and signal delays in large indoor spaces.
  • Full reproducibility is enabled by open-sourced firmware, simulation code, circuit designs, and datasets.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The minimal sensing approach could scale to low-cost platforms for environmental monitoring or leak detection tasks.
  • Extending the simulation environment with outdoor wind models might support testing in uncontrolled field conditions.
  • The policy structure may generalize to other navigation problems involving intermittent or delayed sensory feedback.

Load-bearing premise

The learning-based navigation policy trained in simulation transfers successfully to real-world turbulent airflow with sparse and delayed sensory signals from the minimal sensor suite.

What would settle it

Repeated real-world flight experiments in which the UAV fails to approach or locate the ethanol source when running the deployed simulation-trained policy under the same indoor conditions as the reported successes.

Figures

Figures reproduced from arXiv: 2602.19577 by Kordel K. France, Latifur Khan, Ovidiu Daescu, Rohith Peddi.

Figure 1
Figure 1. Figure 1: The UAV equipped with the olfactory processing unit (OPU) [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: The UAV equipped with the olfactory processing unit (OPU) and [PITH_FULL_IMAGE:figures/full_fig_p002_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: (A) From top to bottom, this panel shows the altitude, velocity, pitch, [PITH_FULL_IMAGE:figures/full_fig_p003_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Olfaction-Vision Model Architecture. Blue and green boxes construct [PITH_FULL_IMAGE:figures/full_fig_p005_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Illustration of the course developed for olfactory navigation by the [PITH_FULL_IMAGE:figures/full_fig_p006_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: The IMU replay of the UAV traveling over the whole course. Unfiltered [PITH_FULL_IMAGE:figures/full_fig_p007_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: We designed the control loop to model the DJI Tello control algorithms and understand how to sequence the olfaction sensors with the other sensors. [PITH_FULL_IMAGE:figures/full_fig_p012_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Prediction of the full 6-second diffusion current for chronoamperometry over various time periods at a 100-Hz measurement frequency. [PITH_FULL_IMAGE:figures/full_fig_p013_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Prediction of the full 6-second diffusion current for chronoamperometry over various time periods at a 10-Hz measurement frequency. [PITH_FULL_IMAGE:figures/full_fig_p013_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Training losses for both olfaction-vision models. Note that the GAT-based model is more accurate, but requires more compute and inference time, [PITH_FULL_IMAGE:figures/full_fig_p015_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Training losses for both Q(λ) and Expected SARSA(λ) algorithms with no blanks modeled [PITH_FULL_IMAGE:figures/full_fig_p017_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Training losses for both Q(λ) and Expected SARSA(λ) algorithms with blanks modeled. 8 [PITH_FULL_IMAGE:figures/full_fig_p017_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Q-values of the final reinforcement learning policy. F. Plume Model Environment We built a plume environment to understand behavioral dynamics of gases over time. We used the environment as a model to evaluate the effects on air volatility from changing different parameters. We later used it to help model navigation algorithms for the UAV. We built the environment on top of the Gymnasium framework and inc… view at source ↗
Figure 14
Figure 14. Figure 14: An image of the breakout board we designed to down-select the olfaction sensors for ethanol detection. [PITH_FULL_IMAGE:figures/full_fig_p019_14.png] view at source ↗
Figure 15
Figure 15. Figure 15: Electrical schematic of the breakout board above. The code repository associated with this manuscript provides Gerber files and full schematics for [PITH_FULL_IMAGE:figures/full_fig_p020_15.png] view at source ↗
Figure 16
Figure 16. Figure 16: One can additively manufacture the full UAV modification kit for both electrochemical and metal oxide sensing configurations on a single 200mm [PITH_FULL_IMAGE:figures/full_fig_p021_16.png] view at source ↗
read the original abstract

Autonomous odor source localization remains a challenging problem for aerial robots due to turbulent airflow, sparse and delayed sensory signals, and strict payload and compute constraints. While prior unmanned aerial vehicle (UAV)-based olfaction systems have demonstrated gas distribution mapping or reactive plume tracing, they rely on predefined coverage patterns, external infrastructure, or extensive sensing and coordination. In this work, we present a complete, open-source UAV system for online odor source localization using a minimal sensor suite. The system integrates custom olfaction hardware, onboard sensing, and a learning-based navigation policy trained in simulation and deployed on a real quadrotor. Through our minimal framework, the UAV is able to navigate directly toward an odor source without constructing an explicit gas distribution map or relying on external positioning systems. Vision is incorporated as an optional complementary modality to accelerate navigation under certain conditions. We validate the proposed system through real-world flight experiments in a large indoor environment using an ethanol source, demonstrating consistent source-finding behavior under realistic airflow conditions. The primary contribution of this work is a reproducible system and methodological framework for UAV-based olfactory navigation and source finding under minimal sensing assumptions. We elaborate on our hardware design and open source our UAV firmware, simulation code, olfaction-vision dataset, and circuit board to the community. Code, data, and designs will be made available at https://github.com/KordelFranceTech/ChasingGhosts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents a complete open-source UAV system for online odor source localization using a minimal sensor suite (custom olfaction hardware plus optional vision). A learning-based navigation policy is trained in simulation and deployed on a real quadrotor to navigate directly toward an odor source without constructing an explicit gas distribution map or relying on external positioning systems. The central claim is validated through real-world flight experiments in a large indoor environment with an ethanol source, demonstrating consistent source-finding behavior under realistic airflow conditions.

Significance. If the sim-to-real transfer is robust, the work is significant as a reproducible methodological framework and open-source contribution (firmware, simulation code, olfaction-vision dataset, and circuit board) for UAV olfactory navigation under strict payload, compute, and sensing constraints. It reduces reliance on mapping, external infrastructure, or extensive coordination compared to prior systems.

major comments (2)
  1. [Simulation Training and Transfer] Simulation-to-real transfer section: the manuscript provides no description of the airflow model (e.g., turbulence statistics or advection parameters), domain randomization schedule, or sensor delay/sparsity injection used during policy training. Without these details, it is impossible to evaluate whether the policy exploits simulation artifacts or genuinely transfers to real turbulent flow with delayed, sparse olfactory signals, which is load-bearing for the deployment claim.
  2. [Real-World Validation] Real-world experiments section: only qualitative statements of 'consistent source-finding behavior' are given. No quantitative metrics (success rate, mean time-to-source, path efficiency, failure modes, or statistical error analysis across trials) or comparison to baselines are reported, undermining the strength of the validation for the central claim.
minor comments (2)
  1. [Abstract and Contributions] The GitHub link is provided but the manuscript should explicitly list which exact artifacts (firmware, dataset, etc.) are released and their current status.
  2. [System Architecture] Notation for the optional vision augmentation and its integration with the olfactory policy could be clarified with a diagram or pseudocode to improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments and recommendation for major revision. We address each of the major comments below and have updated the manuscript to incorporate the suggested improvements for clarity and rigor.

read point-by-point responses
  1. Referee: [Simulation Training and Transfer] Simulation-to-real transfer section: the manuscript provides no description of the airflow model (e.g., turbulence statistics or advection parameters), domain randomization schedule, or sensor delay/sparsity injection used during policy training. Without these details, it is impossible to evaluate whether the policy exploits simulation artifacts or genuinely transfers to real turbulent flow with delayed, sparse olfactory signals, which is load-bearing for the deployment claim.

    Authors: We agree with the referee that these details are essential for assessing the robustness of the sim-to-real transfer. In the revised manuscript, we have expanded the Simulation Training and Transfer section to include a comprehensive description of the airflow model, specifying the turbulence statistics and advection parameters employed. We also detail the domain randomization schedule used during policy training and the methods for injecting sensor delays and sparsity to simulate real-world conditions. These additions demonstrate that the training regimen was designed to account for turbulent flows and sparse, delayed signals, thereby supporting the validity of the real-world deployment. revision: yes

  2. Referee: [Real-World Validation] Real-world experiments section: only qualitative statements of 'consistent source-finding behavior' are given. No quantitative metrics (success rate, mean time-to-source, path efficiency, failure modes, or statistical error analysis across trials) or comparison to baselines are reported, undermining the strength of the validation for the central claim.

    Authors: We acknowledge that the original presentation of the real-world experiments relied primarily on qualitative descriptions. To address this, the revised manuscript now includes quantitative metrics such as success rates over multiple trials, mean time-to-source, path efficiency, and an analysis of failure modes with statistical error bars. We have also incorporated comparisons to relevant baseline approaches where feasible. These enhancements provide stronger empirical support for the central claim of consistent source-finding behavior. revision: yes

Circularity Check

0 steps flagged

No significant circularity; validation is independent real-world evidence

full rationale

The paper's core claim is successful sim-to-real transfer of a learned navigation policy for odor source localization on a UAV, using only onboard sensors without maps or external localization. This is supported by real-world flight experiments in an indoor environment with an ethanol source, which constitute independent physical validation rather than any reduction to simulation fits, self-definitions, or self-citation chains. No equations or derivations are presented that equate outputs to inputs by construction; the training occurs in simulation while the reported success is measured in physical trials. The absence of detailed airflow modeling or domain randomization descriptions raises robustness questions but does not create circularity, as the result is not forced by the inputs. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract does not specify any free parameters, axioms, or invented entities; the approach relies on standard machine learning techniques for policy training and custom hardware integration.

pith-pipeline@v0.9.0 · 5565 in / 1076 out tokens · 27245 ms · 2026-05-15T20:56:58.183730+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

44 extracted references · 44 canonical work pages · 4 internal anchors

  1. [1]

    Drift in a popular metal oxide sensor dataset reveals limitations for gas classification benchmarks,

    N. Dennler, S. Rastogi, J. Fonollosa, A. van Schaik, and M. Schmuker, “Drift in a popular metal oxide sensor dataset reveals limitations for gas classification benchmarks,”Sensors and Actuators B: Chemical, vol. 361, p. 131668, 2022. [Online]. Available: https: //www.sciencedirect.com/science/article/pii/S0925400522003100

  2. [2]

    Limitations in odour recognition and generalization in a neuromorphic olfactory circuit,

    N. Dennler, A. van Schaik, and M. Schmuker, “Limitations in odour recognition and generalization in a neuromorphic olfactory circuit,” Nature Machine Intelligence, vol. 6, pp. 1451–1453, 2024

  3. [3]

    Chronoamperometry with room-temperature ionic liquids: Sub-second inference techniques,

    K. K. France, “Chronoamperometry with room-temperature ionic liquids: Sub-second inference techniques,” 2025. [Online]. Available: https://arxiv.org/abs/2506.04540

  4. [4]

    Smelling nano aerial vehicle for gas source localization and mapping,

    J. Burgu ´es, V . Hern´andez, A. J. Lilienthal, and S. Marco, “Smelling nano aerial vehicle for gas source localization and mapping,”Sensors, vol. 19, no. 3, 2019. [Online]. Available: https://www.mdpi.com/ 1424-8220/19/3/478

  5. [5]

    Design and experimental evaluation of an odor sensing method for a pocket-sized quadcopter,

    S. Shigaki, M. R. Fikri, and D. Kurabayashi, “Design and experimental evaluation of an odor sensing method for a pocket-sized quadcopter,”Sensors, vol. 18, no. 11, 2018. [Online]. Available: https://www.mdpi.com/1424-8220/18/11/3720

  6. [6]

    Sniffy bug: A fully autonomous swarm of gas-seeking nano quadcopters in cluttered environments,

    B. P. Duisterhof, S. Li, J. Burgu ´es, V . J. Reddi, and G. C. H. E. de Croon, “Sniffy bug: A fully autonomous swarm of gas-seeking nano quadcopters in cluttered environments,” in2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 9099– 9106

  7. [7]

    Oevs-fusion: Olfactory-enhanced visual semantic recognition framework for ground stain detection in indoor environments,

    L. Zhang, J. Shi, X. Wei, and L. Feng, “Oevs-fusion: Olfactory-enhanced visual semantic recognition framework for ground stain detection in indoor environments,”Sensors and Actuators B: Chemical, vol. 440, p. 137902, 2025. [Online]. Available: https://www.sciencedirect.com/ science/article/pii/S092540052500677X

  8. [8]

    Msa-ticnn: A dual- modal framework integrating exhaled e-nose data to assist ecg-based psychological stress evaluation for cognitive enhancement,

    L. Zhang, L. Feng, X. Wei, S. Chen, and Y . Zhang, “Msa-ticnn: A dual- modal framework integrating exhaled e-nose data to assist ecg-based psychological stress evaluation for cognitive enhancement,”Sensors and Actuators B: Chemical, vol. 450, p. 139215, 2026. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925400525019926

  9. [9]

    Robotic odor source localization via vision and olfaction fusion navigation algorithm,

    S. Hassan, L. Wang, and K. R. Mahmud, “Robotic odor source localization via vision and olfaction fusion navigation algorithm,” Sensors, vol. 24, no. 7, 2024. [Online]. Available: https://www.mdpi. com/1424-8220/24/7/2309

  10. [10]

    SmellNet: A Large-scale Dataset for Real-world Smell Recognition

    D. Feng, C. Li, W. Dai, and P. P. Liang, “Smellnet: A large-scale dataset for real-world smell recognition,” 2025. [Online]. Available: https://arxiv.org/abs/2506.00239

  11. [11]

    New york smells: A large multimodal dataset for olfaction,

    E. Ozguroglu, J. Liang, R. Liu, M. Chiquier, M. DeTienne, W. W. Qian, A. Horowitz, A. Owens, and C. V ondrick, “New york smells: A large multimodal dataset for olfaction,” 2025. [Online]. Available: https://arxiv.org/abs/2511.20544

  12. [12]

    Emergent behaviour and neural dynamics in artificial agents tracking odour plumes,

    S. H. Singh, F. van Breugel, R. P. N. Rao, and B. W. Brunton, “Emergent behaviour and neural dynamics in artificial agents tracking odour plumes,”Nature Machine Intelligence, vol. 5, no. 1, pp. 58–70, Jan

  13. [13]

    Available: https://doi.org/10.1038/s42256-022-00599-w

    [Online]. Available: https://doi.org/10.1038/s42256-022-00599-w

  14. [14]

    DJITelloPy: Python library to interact with the DJI Tello drone,

    D. F. Escot ´e, “DJITelloPy: Python library to interact with the DJI Tello drone,” https://github.com/damiafuentes/DJITelloPy, 2019, accessed: 2025-07-23. [Online]. Available: https://github.com/ damiafuentes/DJITelloPy

  15. [15]

    Solidworks,

    D. Syst `emes, “Solidworks,” computer-Aided Design (CAD) Software. [Online]. Available: https://www.solidworks.com

  16. [16]

    Easyeda - online pcb design tool,

    J. . LCSC, “Easyeda - online pcb design tool,” https://easyeda.com/, 2024-2025, accessed: 2026-01-19

  17. [18]

    OpenAI Gym

    [Online]. Available: http://arxiv.org/abs/1606.01540

  18. [19]

    High-speed odor sensing using miniaturized electronic nose,

    N. Dennler, D. Drix, T. P. Warner, S. Rastogi, C. D. Casa, T. Ackels, A. T. Schaefer, A. van Schaik, and M. Schmuker, “High-speed odor sensing using miniaturized electronic nose,”Science Advances, vol. 10, no. 45, p. eadp1764, 2024

  19. [20]

    Learning to predict by the methods of temporal differences,

    R. S. Sutton, “Learning to predict by the methods of temporal differences,”Mach. Learn., vol. 3, no. 1, p. 9–44, Aug. 1988. [Online]. Available: https://doi.org/10.1023/A:1022633531479

  20. [22]

    The good scents company information system,

    T. G. S. Company, “The good scents company information system,” http://www.thegoodscentscompany.com/, accessed: 2025-03-08

  21. [23]

    Pmp 2001 - database of perfumery materials and perfor- mance,

    L. . Associates, “Pmp 2001 - database of perfumery materials and perfor- mance,” http://www.leffingwell.com/bacispmp.htm, accessed: 2025-03- 08

  22. [24]

    A principal odor map unifies diverse tasks in olfactory perception,

    B. K. Lee, E. J. Mayhew, B. Sanchez-Lengeling, J. N. Wei, W. W. Qian, K. A. Little, M. Andres, B. B. Nguyen, T. Moloy, J. Yasonik, J. K. Parker, R. C. Gerkin, J. D. Mainland, and A. B. Wiltschko, “A principal odor map unifies diverse tasks in olfactory perception,” Science, vol. 381, no. 6661, pp. 999–1006, 2023. [Online]. Available: https://www.science.o...

  23. [25]

    Microsoft COCO: Common Objects in Context

    T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Doll’a r, and C. L. Zitnick, “Microsoft COCO: common objects in context,”CoRR, vol. abs/1405.0312, 2014. [Online]. Available: http://arxiv.org/abs/1405.0312

  24. [26]

    Sniff ai: Is my ’spicy’ your ’spicy’? exploring llm’s perceptual alignment with human smell experiences,

    S. Zhong, Z. Zhou, C. Dawes, G. Brianz, and M. Obrist, “Sniff ai: Is my ’spicy’ your ’spicy’? exploring llm’s perceptual alignment with human smell experiences,” 2024. [Online]. Available: https://arxiv.org/abs/2411.06950

  25. [27]

    Sigmoid loss for language image pre-training,

    X. Zhai, B. Mustafa, A. Kolesnikov, and L. Beyer, “Sigmoid loss for language image pre-training,” 2023. [Online]. Available: https://arxiv.org/abs/2303.15343

  26. [28]

    Representation learning with contrastive predictive coding,

    A. van den Oord, Y . Li, and O. Vinyals, “Representation learning with contrastive predictive coding,” 2019

  27. [29]

    Learning to fly in seconds,

    J. Eschmann, D. Albani, and G. Loianno, “Learning to fly in seconds,” IEEE Robotics and Automation Letters, vol. 9, no. 7, pp. 6336–6343, 2024

  28. [30]

    YOLOv11: An Overview of the Key Architectural Enhancements

    R. Khanam and M. Hussain, “Yolov11: An overview of the key architectural enhancements,” 2024. [Online]. Available: https: //arxiv.org/abs/2410.17725

  29. [31]

    Pursuit-evasion with decentralized robotic swarm in continuous state space and action space via deep rein- forcement learning,

    G. Singh, D. Lofaro, and D. Sofge, “Pursuit-evasion with decentralized robotic swarm in continuous state space and action space via deep rein- forcement learning,” inProceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART,, INSTICC. SciTePress, 2020, pp. 226–233

  30. [32]

    Active sensing in a dynamic olfactory world,

    J. Crimaldi, H. Lei, A. Schaefer, M. Schmuker, B. H. Smith, A. C. True, J. V . Verhagen, and J. D. Victor, “Active sensing in a dynamic olfactory world,”Journal of Computational Neuroscience, 9 vol. 50, no. 1, pp. 1–6, Feb 2022. [Online]. Available: https: //doi.org/10.1007/s10827-021-00798-1

  31. [33]

    Exploiting plume structure to decode gas source distance using metal-oxide gas sensors,

    M. Schmuker, V . Bahr, and R. Huerta, “Exploiting plume structure to decode gas source distance using metal-oxide gas sensors,”Sensors and Actuators B: Chemical, vol. 235, pp. 636–646, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925400516307833

  32. [35]

    Neuromorphic principles for machine olfaction,

    N. Dennler, A. True, A. van Schaik, and M. Schmuker, “Neuromorphic principles for machine olfaction,”Neuromorphic Computing and Engineering, vol. 5, no. 2, p. 023001, may 2025. [Online]. Available: https://dx.doi.org/10.1088/2634-4386/add0dc

  33. [36]

    On the application of statistical concepts to the buffeting problem,

    H. W. LIEPMANN, “On the application of statistical concepts to the buffeting problem,”Journal of the Aeronautical Sciences, vol. 19, no. 12, pp. 793–800, 1952. [Online]. Available: https://doi.org/10.2514/8.2491

  34. [37]

    Learning from delayed rewards,

    C. Watkins, “Learning from delayed rewards,” Ph.D. dissertation, De- partment of Computer Science, Kings College, Cambridge University, 1989

  35. [38]

    R. S. Sutton and A. G. Barto,Reinforcement Learning: An Introduction. MIT Press, 2018

  36. [39]

    Olfactory inertial odometry: Sensor calibration and drift compensation,

    K. K. France, O. Daescu, A. Paul, and S. Prasad, “Olfactory inertial odometry: Sensor calibration and drift compensation,” 2025. [Online]. Available: https://arxiv.org/abs/2506.04539

  37. [40]

    Lessons from the german tank problem,

    G. Clark, A. Gonye, and S. J. Miller, “Lessons from the german tank problem,” 2021. [Online]. Available: https://arxiv.org/abs/2101.08162

  38. [41]

    A bayesian treatment of the german tank problem,

    C. M. Simon, “A bayesian treatment of the german tank problem,” The Mathematical Intelligencer, vol. 46, no. 2, pp. 117–127, Jun 2024. [Online]. Available: https://doi.org/10.1007/s00283-023-10274-6

  39. [42]

    Goodfellow, Y

    I. Goodfellow, Y . Bengio, and A. Courville,Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org

  40. [43]

    Diffusion graph neural networks and dataset for robust olfactory navigation in hazard robotics,

    K. K. France and O. Daescu, “Diffusion graph neural networks and dataset for robust olfactory navigation in hazard robotics,” 2025. [Online]. Available: https://arxiv.org/abs/2506.00455v3

  41. [44]

    Scentience-ovlc-v1: Joint olfaction-vision-language classifiers,

    K. K. France, “Scentience-ovlc-v1: Joint olfaction-vision-language classifiers,” Hugging Face, 2025. [Online]. Available: https: //huggingface.co/kordelfrance/Olfaction-Vision-Language-Classifiers

  42. [45]

    R. S. Sutton and A. G. Barto,Reinforcement Learning: An Introduction, 2nd ed. The MIT Press, 2018

  43. [47]

    Emergent behavior in evolutionary swarms for machine olfaction,

    K. K. France, A. Paul, I. Banga, and S. Prasad, “Emergent behavior in evolutionary swarms for machine olfaction,” inProceedings of the Genetic and Evolutionary Computation Conference, ser. GECCO ’24. New York, NY , USA: Association for Computing Machinery, 2024, p. 30–38. [Online]. Available: https://doi.org/10.1145/3583131.3590376

  44. [48]

    Scentience app: Olfactory interface instrument,

    Scentience, “Scentience app: Olfactory interface instrument,” 2025, accessed: 2025-03-02. [Online]. Available: https://scentience.ai/news/f/ scentience-app-worlds-first-olfaction-vision-language-model 1 SUPPLEMENTARYMATERIAL A. Overview of Related Work We present a high-level comparison of our work to previous research in olfaction navigation. We strove t...