pith. machine review for the scientific record.

arxiv: 2604.18088 · v1 · submitted 2026-04-20 · 💻 cs.CV · cs.AI · stat.AP

Recognition: unknown

Autonomous Unmanned Aircraft Systems for Enhanced Search and Rescue of Drowning Swimmers: Image-Based Localization and Mission Simulation

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 04:24 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · stat.AP
keywords unmanned aerial vehicles · search and rescue · drowning · YOLO object detection · discrete event simulation · response time · autonomous systems · flotation device delivery

The pith

An unmanned aircraft system with YOLO detection and simulation reduces drowning rescue response time by a factor of five even in a minimal two-hangar setup.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes deploying a drone-in-a-box UAS fleet near swimming areas to automate search and rescue for drowning swimmers through early image-based localization and flotation device drops. It builds a custom dataset, trains and evaluates multiple YOLO versions and sizes using mAP metrics, then applies discrete-event simulations to compare UAS response times against standard rescue operations. Computational tests in a German lake district demonstrate that UAS assistance shortens overall response time. The central finding is that even a small configuration with two single-UAV hangars achieves a fivefold reduction relative to conventional methods. Faster location and aid delivery matters because drowning survival depends heavily on minimizing the interval before intervention begins.

Core claim

The paper establishes that a UAS of UAVs stationed in purpose-built hangars can execute fully automated S&R missions, using YOLO for automatic distressed-swimmer detection in images. DES models of the full rescue timeline show that this approach shortens response time by a factor of five compared with standard rescue operations when only two hangars, each holding one UAV, are deployed in the test region.

What carries the argument

YOLO-based image object detection trained on a custom swimmer dataset, paired with discrete-event simulation of complete response timelines for both UAS and standard rescue operations.
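The shape of such a comparison can be pictured with a toy Monte Carlo sketch; every distribution and number below is an illustrative assumption, not a parameter from the paper:

```python
import random

random.seed(0)

def sro_response_time():
    # Illustrative standard-rescue timeline (all values assumed):
    # dispatch + travel to water access + manual visual search.
    dispatch = random.uniform(60, 120)        # seconds
    travel   = random.uniform(300, 900)       # seconds
    search   = random.expovariate(1 / 300)    # seconds, mean 300
    return dispatch + travel + search

def uas_response_time(p_detect=0.95):
    # Illustrative UAS timeline: hangar launch + transit + automated
    # detection sweeps; a missed detection forces another pass.
    launch  = random.uniform(20, 40)          # seconds
    transit = random.uniform(30, 120)         # seconds
    t = launch + transit
    while random.random() > p_detect:         # repeat sweep on a miss
        t += random.uniform(60, 120)
    return t + 10                             # flotation-device drop

n = 10_000
sro = sum(sro_response_time() for _ in range(n)) / n
uas = sum(uas_response_time() for _ in range(n)) / n
print(f"mean SRO {sro:.0f} s, mean UAS {uas:.0f} s, ratio {sro / uas:.1f}x")
```

With these made-up inputs the ratio falls out of the timeline structure alone; the paper's actual DES models supply measured mAP-derived detection rates and real flight parameters instead.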

If this is right

  • UAS can be added to existing standard rescue operations to locate the target earlier and deliver a flotation device automatically.
  • YOLO models from versions 3, 5, and 8 in nano to extra-large sizes can be selected based on measured mAP performance for this swimmer detection task.
  • DES runs let planners choose the best number, type, and placement of hangars and UAVs by quantifying time savings for each configuration.
  • In the Lusatian Lake District example, the smallest tested UAS already delivers the reported fivefold response-time improvement.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same UAS-plus-YOLO pipeline could be adapted for other open-water emergencies where visual detection and rapid payload delivery are needed.
  • Linking the system to lifeguard dispatch centers might allow hybrid human-UAS responses that further reduce total time.
  • Validation flights in real conditions would directly test whether the simulation assumptions hold when unmodeled factors such as wind or water glare appear.

Load-bearing premise

The YOLO detector must retain high accuracy on real water surfaces under changing light and swimmer poses, and the simulation models must capture every significant real-world delay.

What would settle it

A field trial that flies the actual UAVs over swimmers in the target lake area under varied lighting and weather, records detection success rates, and compares measured end-to-end rescue times against the DES predictions.

Figures

Figures reproduced from arXiv: 2604.18088 by Armin Fügenschuh, Michael Breuß, Sascha Emanuel Zell, Toni Schneidereit.

Figure 1. (a) BUDDY I, a prototype water rescue UAV manufactured by MINTMASTERS GmbH and equipped with an inflatable Restube flotation device [7], developed and tested as part of the RescueFly project. (b) The UAV hangar dRack, manufactured by DELTA-Fluid Industrietechnik GmbH, additionally equipped with optical sensors, computing hardware, and algorithms.
Figure 2. Schematic representation of the YOLOv1 architecture, showing the processing pipeline from input image (green) to detection output (pink/yellow) [49].
Figure 3. An overview of the UAV-captured dataset, in which heterogeneous groups of test subjects perform different standard swimming and distress behaviors.
Figure 4. An overview of the different poses and movements in the dataset (from top row to bottom row): swimming, floating, standing, underwater/diving, face-down floating.
Figure 5. Synthetic image creation procedure: original swimmer image from a Kaggle dataset (left), background-removed swimmer (middle), swimmer randomly placed on a lake background (right) [49].
Figure 6. Illustration of the calculation of the horizontal FOV a for a top-down UAV camera with angular FOV α and flight altitude h (the lateral FOV is analogous).
Figure 7. Lusatian Lake District, Germany.
Figure 8. Detection results for all trained YOLO models and architectures: the top row shows nano variants (YOLOv3n, -v5n, -v8n), the bottom row extra-large variants (YOLOv3x, -v5x, -v8x).
Figure 9. (caption not recovered)
Figure 10. Sequence of detection results with YOLOv8n, illustrating the transition of a swimmer's classification from "prob. ok" to "prob. NOT ok", highlighting the model's ability to capture dynamic behavioral changes.
Figure 11. Examples of failure cases of the YOLOv5n model.
Figure 12. Illustration of an exemplary water rescue SRO in the Lusatian Lake District, Germany, including the fastest paths of the nearest ambulance (purple line), fire truck (orange line), and life boat (blue line), as well as predefined hotspot areas (red areas).
Figure 13. Histogram of the estimated target approach time for an SRO of a distressed swimmer, based on an MCS with N_MCS = 10,000 iterations.
Figure 14. Histograms for the simulation of 100,000 S&R operations using UAS configurations with different numbers of hangars p and respective success rates χ(p=1) = 57.3%, χ(p=2) = 99.39%, χ(p=3) = 99.64%, and χ(p=6) = 99.54%.
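The footprint geometry sketched in Figure 6 is plain trigonometry: a nadir camera at altitude h with angular FOV α sees a ground strip of width a = 2·h·tan(α/2). A minimal sketch (the altitude and FOV values are illustrative, not the paper's):

```python
import math

def horizontal_fov(altitude_m: float, angular_fov_deg: float) -> float:
    """Ground footprint width a = 2 * h * tan(alpha / 2) for a top-down camera."""
    alpha = math.radians(angular_fov_deg)
    return 2 * altitude_m * math.tan(alpha / 2)

# e.g. a UAV at 50 m altitude with an 84-degree horizontal FOV
print(round(horizontal_fov(50, 84), 1))  # → 90.0 (metres)
```

The paper's full model additionally tracks heading angles and wind vectors to decide whether the target lies inside the moving FOV; those terms are omitted here.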
original abstract

Drowning is an omnipresent risk associated with any activity on or in the water, and rescuing a drowning person is particularly challenging because of the time pressure, making a short response time important. Further complicating water rescue are unsupervised and extensive swimming areas, precise localization of the target, and the transport of rescue personnel. Technical innovations can provide a remedy: We propose an Unmanned Aircraft System (UAS), also known as a drone-in-a-box system, consisting of a fleet of Unmanned Aerial Vehicles (UAVs) allocated to purpose-built hangars near swimming areas. In an emergency, the UAS can be deployed in addition to Standard Rescue Operation (SRO) equipment to locate the distressed person early by performing a fully automated Search and Rescue (S&R) operation and dropping a flotation device. In this paper, we address automatically locating distressed swimmers using the image-based object detection architecture You Only Look Once (YOLO). We present a dataset created for this application and outline the training process. We evaluate the performance of YOLO versions 3, 5, and 8 and architecture sizes (nano, extra-large) using Mean Average Precision (mAP) metrics mAP@.5 and mAP@.5:.95. Furthermore, we present two Discrete-Event Simulation (DES) approaches to simulate response times of SRO and UAS-based water rescue. This enables estimation of time savings relative to SRO when selecting the UAS configuration (type, number, and location of UAVs and hangars). Computational experiments for a test area in the Lusatian Lake District, Germany, show that UAS assistance shortens response time. Even a small UAS with two hangars, each containing one UAV, reduces response time by a factor of five compared to SRO.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes a drone-in-a-box UAS for automated search-and-rescue of drowning swimmers. It describes creation of a custom dataset, training and mAP evaluation of YOLO v3/v5/v8 models (nano to extra-large) for image-based swimmer localization, and two discrete-event simulation (DES) models that compare UAS-assisted response times against standard rescue operations (SRO). Experiments for the Lusatian Lake District conclude that even a minimal configuration of two hangars each containing one UAV reduces response time by a factor of five relative to SRO.

Significance. If the detector performance and simulation assumptions transfer to operational conditions, the work supplies a concrete, configurable framework for quantifying time savings from UAS augmentation of water rescue. The systematic comparison across YOLO variants and the explicit DES modeling of fleet size and hangar placement are strengths that could inform deployment decisions.

major comments (3)
  1. [Abstract] The mAP@.5 and mAP@.5:.95 results for the YOLO models are reported without dataset cardinality, train/validation/test split, number of epochs, augmentation strategy, or any measure of variability (error bars or multiple seeds). These omissions are load-bearing because the claimed usability of image-based localization directly feeds the DES inputs that produce the factor-of-five claim.
  2. [Abstract] The factor-of-five response-time reduction for the two-hangar/one-UAV configuration is presented as a computational result, yet no values are given for flight speed, dispatch latency, flotation-device drop time, false-negative rate of the detector, or any sensitivity analysis on these parameters. Without these, the quantitative advantage cannot be reproduced or stress-tested against realistic operational variability.
  3. [Evaluation and Simulation sections] The manuscript contains no field validation, real-incident footage, or controlled tests under variable lighting, wave action, or swimmer poses. Because the central practical claim is that the UAS shortens response time in actual drowning events, the absence of any transfer evidence from the custom dataset to real water conditions is a load-bearing gap.
minor comments (2)
  1. [Abstract] The abstract states that two DES approaches are used but does not indicate how they differ in modeling detection outcomes or handling stochastic delays.
  2. [Evaluation] Standard mAP definitions and the exact IoU thresholds should be referenced or restated for readers unfamiliar with the COCO-style metrics.
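For readers unfamiliar with the COCO-style metrics named in minor comment 2, the core of single-threshold AP can be sketched as follows; this is a generic illustration, not the paper's evaluation code (mAP@.5:.95 averages the same AP over IoU thresholds 0.5 to 0.95 in steps of 0.05):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def average_precision(preds, gts, thr=0.5):
    """AP at one IoU threshold: preds are (score, box), greedily matched
    to still-unmatched ground-truth boxes in descending score order."""
    preds = sorted(preds, key=lambda p: -p[0])
    matched, tps = set(), []
    for score, box in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) >= best_iou:
                best, best_iou = i, iou(box, g)
        tps.append(best is not None)
        if best is not None:
            matched.add(best)
    ap, tp, prev_recall = 0.0, 0, 0.0
    for k, is_tp in enumerate(tps, 1):
        tp += is_tp
        recall = tp / len(gts)
        if is_tp:                  # area under the precision-recall steps
            ap += (recall - prev_recall) * (tp / k)
            prev_recall = recall
    return ap
```

A perfect single detection gives AP 1.0; one correct and one spurious detection over two ground-truth boxes gives AP 0.5, since the second box is never recalled.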

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment point-by-point below, indicating revisions where the manuscript will be updated in the next version.

point-by-point responses
  1. Referee: [Abstract] The mAP@.5 and mAP@.5:.95 results for the YOLO models are reported without dataset cardinality, train/validation/test split, number of epochs, augmentation strategy, or any measure of variability (error bars or multiple seeds). These omissions are load-bearing because the claimed usability of image-based localization directly feeds the DES inputs that produce the factor-of-five claim.

    Authors: The Evaluation section of the manuscript describes the custom dataset creation, including its cardinality and the train/validation/test split ratios, along with the training process covering epochs and augmentation strategies. The abstract was intentionally kept brief. To improve accessibility, we will revise the abstract to concisely include dataset size, split details, key training hyperparameters, and a note on the single-run nature of the reported mAP values. We will also add a statement on reproducibility in the Evaluation section. revision: yes

  2. Referee: [Abstract] The factor-of-five response-time reduction for the two-hangar/one-UAV configuration is presented as a computational result, yet no values are given for flight speed, dispatch latency, flotation-device drop time, false-negative rate of the detector, or any sensitivity analysis on these parameters. Without these, the quantitative advantage cannot be reproduced or stress-tested against realistic operational variability.

    Authors: The Simulation section specifies the parameter values used in the DES models, such as flight speeds, dispatch latencies, drop times, and incorporation of the detector's false-negative rate derived from the mAP results. The factor-of-five outcome is computed from these. We agree that sensitivity analysis would strengthen the claims and have added a dedicated subsection performing sensitivity analysis on these parameters (e.g., varying flight speed and false-negative rates) to the revised manuscript. revision: yes

  3. Referee: [Evaluation and Simulation sections] The manuscript contains no field validation, real-incident footage, or controlled tests under variable lighting, wave action, or swimmer poses. Because the central practical claim is that the UAS shortens response time in actual drowning events, the absence of any transfer evidence from the custom dataset to real water conditions is a load-bearing gap.

    Authors: This is a valid limitation of the current work, which relies on a custom dataset and simulation rather than real-time field deployments. We have expanded the Discussion section to explicitly address the gap in transfer evidence, discuss potential domain shifts due to lighting/wave conditions, and outline planned future field validation. New real-world experiments cannot be added within this revision cycle. revision: partial
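The one-at-a-time sensitivity sweep promised in response 2 could take the following shape; `uas_time` is a hypothetical stand-in for the paper's DES model, and every numeric value here is an illustrative assumption:

```python
def uas_time(dist_m, speed_ms, p_fn, launch_s=30.0, sweep_s=90.0, drop_s=10.0):
    """Expected UAS response time under a simple analytic model:
    transit plus a geometric number of search sweeps (mean 1 / (1 - p_fn))."""
    expected_sweeps = 1.0 / (1.0 - p_fn)
    return launch_s + dist_m / speed_ms + expected_sweeps * sweep_s + drop_s

# one-at-a-time sweep around a baseline of 1.5 km at 15 m/s, 5% miss rate
base = uas_time(1500, 15, 0.05)
for speed in (10, 15, 20):
    for p_fn in (0.01, 0.05, 0.20):
        t = uas_time(1500, speed, p_fn)
        print(f"speed={speed:2d} m/s  p_fn={p_fn:.2f}  t={t:6.1f} s  delta={t - base:+6.1f} s")
```

Even this toy version shows which input the factor-of-five claim is most exposed to: the detector miss rate enters multiplicatively through repeated sweeps, while flight speed only shifts the transit term.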

Circularity Check

0 steps flagged

No circularity: simulation outputs are computed from independent detection metrics and operational parameters

full rationale

The paper creates a new swimmer dataset, trains and evaluates YOLO variants to produce mAP scores as empirical measurements, then feeds those scores plus separately assumed flight/drop/dispatch times into two DES models to compute response-time ratios. The factor-of-five claim is an output of the simulation run on the Lusatian Lake District scenario, not a re-expression of the mAP values or a fitted parameter renamed as prediction. No equations, self-citations, or uniqueness theorems are invoked that would make the result definitionally equivalent to its inputs. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 0 invented entities

The central claim rests on the reliability of YOLO detection in aquatic scenes and the fidelity of the two DES models to actual rescue logistics; both are introduced without external benchmarks in the provided abstract.

free parameters (1)
  • UAS fleet size and hangar placement
    Specific numbers (two hangars, one UAV each) selected for the reported experiment; these directly determine the simulated time savings.
axioms (2)
  • domain assumption YOLO object detectors trained on the custom dataset generalize to real drowning incidents
    Invoked when claiming the image-based localization component will enable early detection.
  • domain assumption The discrete-event simulation accurately represents all relevant delays in both SRO and UAS operations
    Required for the factor-of-five reduction to hold outside the model.

pith-pipeline@v0.9.0 · 5653 in / 1356 out tokens · 43520 ms · 2026-05-10T04:24:06.857653+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

81 extracted references · 49 canonical work pages

  1. [1]

    The Effects of Prac- ticing Swimming on the Psychological Tone in Adulthood

    Silviu Petrescu, Gabriel Piţigoi, and Mihaela Păunescu. The Effects of Prac- ticing Swimming on the Psychological Tone in Adulthood. Procedia-Social and Behavioral Sciences, 159:74–77, 2014. doi: 10.1016/j.sbspro.2014.12.331

  2. [2]

    Enhancing Water Safety: Exploring Recent Technological Approaches for Drowning Detection

    Salman Jalalifar, Andrew Belford, Eila Erfani, Amir Razmjou, Rouzbeh Ab- bassi, Masoud Mohseni-Dargah, and Mohsen Asadnia. Enhancing Water Safety: Exploring Recent Technological Approaches for Drowning Detection. Sensors, 24(2):331, 2024. doi: 10.3390/s24020331

  3. [3]

    Addressing Gaps in our Understanding of the Epi- demiology of Drowning at the Global, National, and Local Level

    Tessa Clemens. Addressing Gaps in our Understanding of the Epi- demiology of Drowning at the Global, National, and Local Level . Ph.D. thesis, York University, Toronto, ON, January 2017. A vailable at https://yorkspace.library.yorku.ca/server/api/core/bitstreams/ 541f7412-92b0-456f-81ce-b5bcf66892a9/content

  4. [4]

    DLRG Statistik 2023: Min- destens 378 Menschen in Deutschland ertrunken, 2023

    Deutsche Lebens-Rettungs-Gesellschaft (DLRG). DLRG Statistik 2023: Min- destens 378 Menschen in Deutschland ertrunken, 2023. URL https:// www.dlrg.de/informieren/die-dlrg/presse/statistik-ertrinken/. (Ac- cessed: 2025-10-01)

  5. [5]

    Samuel Ndueso John, I. G. Ukpabio, O. Omoruyi, Godfrey Onyiagha, Etinosa Noma-Osaghae, and K. O. Okokpujie. Design of a Drowning Rescue Alert System. Int. J. Mech. Eng. Technol. (IJMET) , 10(1):1987–1995, 2019

  6. [6]

    Unmanned Aircraft Systems

    J. von Beesten, H. Braßel, M. Breuß, H. Fricke, W. Hardt, A. Heller, E. Kern, M. Khan Mohammadi, M. Lindner, E. Pfister, T. Schneidereit, T. Stuchtey, A. M. Yarahmadi, T. Zeh, S. Zell, and T. Zügel. RescueFly – Einsatz von dezentral stationierten Drohnen (“Unmanned Aircraft Systems”, UAS) zur Un- terstützung bei der Wasserrettung in schwer zugänglichen un...

  7. [7]

    Restubes, 2024

    Restube GmbH. Restubes, 2024. URL https://restube.com/collections/ restubes. (Accessed: 2025-10-01)

  8. [8]

    Hangar System for Un- manned Aerial Vehicle Autonomous Missions

    Ariane Heller, Reda Harradi, and Wolfram Hardt. Hangar System for Un- manned Aerial Vehicle Autonomous Missions. In 2024 International Symposium ELMAR, pages 291–294, Zadar, Croatia, 2024. doi: 10.1109/ELMAR62909. 2024.10694237

  9. [9]

    Schwartz

    P.M. Schwartz. Global Data Privacy: The EU Way. NYUL Rev. , 94: 771–818, 2019. URL https://www.nyulawreview.org/wp-content/uploads/ 2019/10/NYULAWREVIEW-94-4-Schwartz.pdf

  10. [10]

    P. Zhu, L. Wen, D. Du, X. Bian, H. Fan, Q. Hu, and H. Ling. Detection and Tracking Meet Drones Challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence , 44(11):7380–7399, 2022. doi: 10.1109/TPAMI.2021. 3119563

  11. [11]

    Mathew, V

    Leon Amadeus Varga, Benjamin Kiefer, Martin Messmer, and Andreas Zell. SeaDronesSee: A Maritime Benchmark for Detecting Humans in Open Water. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (W ACV), pages 3686–3696, 2022. doi: 10.1109/WACV51458.2022.00374

  12. [12]

    Morales, R

    J. Morales, R. Vázquez-Martín, A. Mandow, D. Morilla-Cabello, and A. García- Cerezo. The UMA-SAR Dataset: Multimodal Data Collection from a Ground Vehicle During Outdoor Disaster Response Training Exercises. The Inter- national Journal of Robotics Research , 40(6-7):835–847, 2021. doi: 10.1177/ 02783649211004959

  13. [13]

    Sambolek and M

    S. Sambolek and M. Ivašić-Kos. Search and Rescue Image Dataset for Person Detection - SARD, 2021

  14. [14]

    You Only Look Once: Unified, Real-Time Object Detection

    Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 779–788, 2016. doi: 10.1109/CVPR.2016.91

  15. [15]

    A Comprehensive Review of YOLO Architectures in Computer Vi- sion: From YOLOv1 to YOLOv8 and YOLO-NAS

    Juan Terven, Diana-Margarita Córdova-Esparza, and Julio-Alejandro Romero- González. A Comprehensive Review of YOLO Architectures in Computer Vi- sion: From YOLOv1 to YOLOv8 and YOLO-NAS. Machine Learning and Knowledge Extraction, 5(4):1680–1716, 2023. doi: 10.3390/make5040083

  16. [16]

    W. Liu, D. Anguelov, D Erhan, C. Szegedy, S. Reed, C.Y Fu, and A.C. Berg. SSD: Single Shot MultiBox Detector. In Computer Vision – ECCV 2016, pages 21–37. Springer International Publishing, 2016. doi: 10.1007/ 978-3-319-46448-0\_2

  17. [17]

    Girshick, J

    R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition , pages 580–587, 2014. doi: 10.1109/CVPR.2014.81. 30

  18. [18]

    Hmidani and E

    O. Hmidani and E. M. Ismaili Alaoui. A Comprehensive Survey of the R- CNN Family for Object Detection. In 2022 5th International Conference on Advanced Communication Technologies and Networking (CommNet) , pages 1– 6, 2022. doi: 10.1109/CommNet56067.2022.9993862

  19. [19]

    Hasan, J

    S. Hasan, J. Joy, F. Ahsan, H. Khambaty, M. Agarwal, and J. Mounsef. A Water Behavior Dataset for an Image-Based Drowning Solution. In 2021 IEEE Green Energy and Smart Systems Conference (IGESSC) , pages 1–5, 2021. doi: 10.1109/IGESSC53124.2021.9618700

  20. [20]

    Terry Suh, Naveen Kuppuswamy, Tao Pang, Paul Mitiguy, Alex Alspach, and Russ Tedrake

    D. Broyles, C. R. Hayner, and K. Leung. WiSARD: A Labeled Visual and Ther- mal Image Dataset for Wilderness Search and Rescue. In 2022 IEEE/RSJ In- ternational Conference on Intelligent Robots and Systems (IROS) , pages 9467– 9474, 2022. doi: 10.1109/IROS47612.2022.9981298

  21. [21]

    Sambolek and M

    S. Sambolek and M. Ivasic-Kos. Automatic Person Detection in Search and Rescue Operations Using Deep CNN Detectors. IEEE Access, 9:37905–37922,

  22. [22]

    doi: 10.1109/ACCESS.2021.3063681

  23. [23]

    Martinez-Alpiste, G

    I. Martinez-Alpiste, G. Golcarenarenji, Q. Wang, and J.M. Alcaraz-Calero. Search and Rescue Operation using UA Vs: A Case Study. Expert Systems with Applications, 178:114937, 2021. doi: 10.1016/j.eswa.2021.114937

  24. [24]

    Adarsh, P

    P. Adarsh, P. Rathi, and M. Kumar. YOLO v3-Tiny: Object Detection and Recognition using one stage improved model. In 2020 6th International Confer- ence on Advanced Computing and Communication Systems (ICACCS) , pages 687–694, 2020. doi: 10.1109/ICACCS48705.2020.9074315

  25. [25]

    Swimmer Localization from a Moving Camera

    Long Sha, Patrick Lucey, Stuart Morgan, Dave Pease, and Sridha Sridharan. Swimmer Localization from a Moving Camera. In 2013 International Con- ference on Digital Image Computing: Techniques and Applications (DICTA) , Hobart, Australia, November 2013. IEEE. doi: 10.1109/dicta.2013.6691533

  26. [26]

    Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UA V for Search and Rescue Operations

    Eleftherios Lygouras, Nicholas Santavas, Anastasios Taitzoglou, Konstantinos Tarchanidis, Athanasios Mitropoulos, and Antonios Gasteratos. Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UA V for Search and Rescue Operations. Sensors, 19(16):3542, 2019. doi: 10. 3390/s19163542

  27. [27]

    Y. Li, H. Huang, Q. Xie, L. Yao, and Q. Chen. Research on a Surface Defect Detection Algorithm Based on MobileNet-SSD. Applied Sciences, 8(9), 2018. doi: 10.3390/app8091678

  28. [28]

    Complementary

    Eleftherios Lygouras and Antonios Gasteratos. A New Method to combine Detection and Tracking Algorithms for Fast and Accurate Human Localization in UA V-based SAR Operations. In2020 International Conference on Unmanned Aircraft Systems (ICUAS) , pages 1688–1696, Athens, Greece, 2020. doi: 10. 1109/ICUAS48674.2020.9213873. 31

  29. [29]

    Fast R-CNN

    Ross Girshick. Fast R-CNN. In 2015 IEEE International Conference on Com- puter Vision (ICCV) , pages 1440–1448, Santiago, Chile, December 2015. IEEE. doi: 10.1109/iccv.2015.169

  30. [30]

    2017, in Proceedings of the IEEE International Conference on Computer Vision, 2961–2969, doi: 10.1109/ICCV.2017.322

    Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask R- CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2961–2969, Venice, Italy, Oct 2017. doi: 10.1109/iccv.2017.322

  31. [31]

    Wider or Deeper: Revisiting the ResNet Model for Visual Recognition

    Zifeng Wu, Chunhua Shen, and Anton van den Hengel. Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognition, 90: 119–133, 2019. doi: 10.1016/j.patcog.2019.01.006

  32. [32]

    J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-Speed Tracking with Kernelized Correlation Filters. IEEE Transactions on Pattern Analysis and Machine Intelligence , 37(03):583–596, mar 2015. doi: 10.1109/TPAMI. 2014.2345390

  33. [33]

    Dalal and B

    N. Dalal and B. Triggs. Histograms of Oriented Gradients for Human Detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 886–893, San Diego, California, 2005. doi: 10.1109/CVPR.2005.177

  34. [34]

    A Simulation Model for Emergency Medical Services Call Centers

    Martin van Buuren, Geert Jan Kommer, Rob van der Mei, and Sandjai Bhulai. A Simulation Model for Emergency Medical Services Call Centers. In 2015 Winter Simulation Conference (WSC) , pages 844–855, 2015. doi: 10.1109/ WSC.2015.7408221

  35. [35]

    Simulation-Based Drone Assisted Search Operations in a River

    Mustafa Cicek, Sinan Pasli, Melih Imamoglu, Metin Yadigaroglu, Muhammed Fatih Beser, and Abdulkadir Gunduz. Simulation-Based Drone Assisted Search Operations in a River. Wilderness & Environmental Medicine, 33(3):311–317, 2022. doi: 10.1016/j.wem.2022.05.006. PMID: 35843856

  36. [36]

    Unmanned Aerial Vehicles (Drones) to Prevent Drown- ing

    Celia Seguin, Gilles Blaquière, Anderson Loundou, Pierre Michelet, and Thibaut Markarian. Unmanned Aerial Vehicles (Drones) to Prevent Drown- ing. Resuscitation, 127:63–67, 2018. doi: 10.1016/j.resuscitation.2018.04.005

  37. [37]

    Model Fidelity in Mission Scenario Simulations for Systems of Systems: A Case Study of Maritime Search and Rescue

    Sofia Schön. Model Fidelity in Mission Scenario Simulations for Systems of Systems: A Case Study of Maritime Search and Rescue . PhD thesis, Linkopings Universitet (Sweden), 2023

  38. [38]

    An Agent-based Modelling Framework for Performance Assessment of Search and Rescue Operations in the Barents Sea

    Behrooz Ashrafi, Gibeom Kim, Masoud Naseri, Javad Barabady, Sushmit Dhar, Gyunyoung Heo, and Sejin Baek. An Agent-based Modelling Framework for Performance Assessment of Search and Rescue Operations in the Barents Sea. Safety in Extreme Environments , 6(3):183–200, September 2024. doi: 10.1007/ s42797-024-00101-2

  39. [39]

    Research on Optimal Model of Maritime Search and Rescue Route for Rescue of Multiple 32 Distress Targets

    Wen-Chih Ho, Jian-Hung Shen, Chung-Ping Liu, and Yung-Wei Chen. Research on Optimal Model of Maritime Search and Rescue Route for Rescue of Multiple 32 Distress Targets. Journal of Marine Science and Engineering , 10(4):460, 2022. doi: 10.3390/jmse10040460

  40. [40]

    Simulation of Search and Rescue Operations of the Continental Coast of Portugal , volume 2, pages 213–221

    B Lee and AP Teixeira. Simulation of Search and Rescue Operations of the Continental Coast of Portugal , volume 2, pages 213–221. CRC Press, 2022. doi: 10.1201/9781003320289-23

  41. [41]

    Information System Computer Dynamics Simulation of the Water Traffic Emergency Rescue by Vensim Software

    Qing Yang and Meng Li. Information System Computer Dynamics Simulation of the Water Traffic Emergency Rescue by Vensim Software. In 2021 IEEE 3rd International Conference on Civil Aviation Safety and Information Technology (ICCASIT), pages 614–620, 2021. doi: 10.1109/ICCASIT53235.2021.9633389

  42. [42]

    Sonia Waharte and Niki Trigoni. Supporting Search and Rescue Operations with UAVs. In 2010 International Conference on Emerging Security Technologies, pages 142–147, Canterbury, UK, 2010. doi: 10.1109/est.2010.31

  43. [43]

    Ebtehal Turki Alotaibi, Shahad Saleh Alqefari, and Anis Koubaa. LSAR: Multi-UAV Collaboration for Search and Rescue Missions. IEEE Access, 7:55817–55832, 2019. doi: 10.1109/ACCESS.2019.2912306

  44. [44]

    T. Schneidereit, S. Gohrenz, and M. Breuß. Object Detection Characteristics in a Learning Factory Environment using YOLOv8. In Kohei Arai, editor, Intelligent Systems and Applications, pages 288–308, Cham, 2025. Springer Nature Switzerland

  45. [45]

    Jawad N Yasin, Sherif AS Mohamed, Mohammad-Hashem Haghbayan, Jukka Heikkonen, Hannu Tenhunen, and Juha Plosila. Unmanned Aerial Vehicles (UAVs): Collision Avoidance Systems and Approaches. IEEE Access, 8:105139–105155, 2020. doi: 10.1109/access.2020.3000064

  46. [46]

    Jisu Kim, Sungjun Hong, Jeonghyun Baek, Euntai Kim, and Heejin Lee. Autonomous Vehicle Detection System using Visible and Infrared Camera. In 2012 12th International Conference on Control, Automation and Systems, pages 630–634, Jeju, Korea (South), 2012

  47. [47]

    Young K Kwag and Jung W Kang. Obstacle Awareness and Collision Avoidance Radar Sensor System for Low-Altitude Flying Smart UAV. In The 23rd Digital Avionics Systems Conference (IEEE Cat. No. 04CH37576), volume 2, pages 12–D, Salt Lake City, UT, USA, 2004. doi: 10.1109/DASC.2004.1390837

  48. [48]

    Alexey Bochkovskiy. Yolo v4, v3 and v2 for Windows and Linux, n.d. URL https://github.com/AlexeyAB/darknet. (Accessed: 2025-10-01)

  49. [49]

    Joseph Redmon. Darknet: Open Source Neural Networks in C, 2013–2016. URL http://pjreddie.com/darknet/. (Accessed: 2025-10-01)

  50. [50]

    Mohsen Khan Mohammadi, Toni Schneidereit, Ashkan Mansouri Yarahmadi, and Michael Breuß. Investigating Training Datasets of Real and Synthetic Images for Outdoor Swimmer Localisation with YOLO. AI, 5(2):576–593, 2024. doi: 10.3390/ai5020030

  51. [51]

    Mark Everingham, S. M. Eslami, Luc Gool, Christopher K. Williams, John Winn, and Andrew Zisserman. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vision, 111(1):98–136, January 2015. doi: 10.1007/s11263-014-0733-5

  52. [52]

    Joseph Redmon and Ali Farhadi. YOLOv3: An Incremental Improvement, 2018

  53. [53]

    Chengyang Wang and Caiming Zhong. Adaptive Feature Pyramid Networks for Object Detection. IEEE Access, 9:107024–107032, 2021. doi: 10.1109/access.2021.3100369

  54. [54]

    Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision – ECCV 2014, pages 740–755, Switzerland, 2014. Springer International Publishing. doi: 10.1007...

  55. [55]

    Joseph Redmon and Ali Farhadi. YOLO9000: Better, Faster, Stronger. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6517–6525, 2017. doi: 10.1109/CVPR.2017.690

  56. [56]

    Glenn Jocher, Ayush Chaurasia, Alex Stoken, Jirka Borovec, NanoCode012, Yonghye Kwon, TaoXie, Kalen Michael, Jiacong Fang, Imyhxy, Lorna, Colin Wong, Zeng Yifu, Abhiram V, Diego Montes, Zhiqiang Wang, Cristi Fati, Jebastin Nadar, Laughing, UnglvKitDe, Tkianai, YxNONG, Piotr Skalski, Adam Hogan, Max Strobel, Mrinal Jain, Lorenzo Mammana, and Xylieong. ultralytics/yolov5: v6.2 – YOLOv5 Classification Models, Apple M1, Reproducibility, ClearML and Deci.ai integrations, August 2022

  57. [57]

    Glenn Jocher. YOLOv5 by Ultralytics, 2020. URL https://github.com/ultralytics/yolov5

  58. [58]

    Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, June 2020. IEEE

  59. [59]
  60. [60]

    Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path Aggregation Network for Instance Segmentation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 2018. IEEE. doi: 10.1109/cvpr.2018.00913

  61. [61]

    Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics YOLO, January 2023. URL https://github.com/ultralytics/ultralytics. (Accessed: 2025-10-01)

  63. [63]

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016. IEEE. doi: 10.1109/cvpr.2016.90

  64. [64]

    Chris Jay Hoofnagle, Bart van der Sloot, and Frederik Zuiderveen Borgesius. The European Union General Data Protection Regulation: What it is and what it means. SSRN Electronic Journal, 2018. doi: 10.2139/ssrn.3254511

  65. [65]

    Sean Coughlin. Swimmers Dataset, 2021. URL https://www.kaggle.com/datasets/seanmc4/swimmers. (Accessed: 2025-01-01)

  66. [66]

    Slavomira Schneidereit, Ashkan Mansouri Yarahmadi, Toni Schneidereit, Michael Breuß, and Marc Gebauer. YOLO-Based Object Detection in Industry 4.0 Fischertechnik Model Environment, pages 1–20. Springer Nature, Switzerland, 2024. doi: 10.1007/978-3-031-47724-9_1

  67. [67]

    Paul Sorensen and Richard Church. Integrating Expected Coverage and Local Reliability for Emergency Medical Services Location Problems. Socio-Economic Planning Sciences, 44(1):8–18, 2010. doi: 10.1016/j.seps.2009.04.002

  68. [68]

    Kanchala Sudtachat, Maria E Mayorga, and Laura A Mclay. A Nested-Compliance Table Policy for Emergency Medical Service Systems under Relocation. Omega, 58:154–168, 2016. doi: 10.1016/j.omega.2015.06.001

  69. [69]

    Lina Aboueljinane, Evren Sahin, and Zied Jemai. A Review on Simulation Models applied to Emergency Medical Service Operations. Computers & Industrial Engineering, 66(4):734–750, 2013. doi: 10.1016/j.cie.2013.09.017

  70. [70]

    Sascha Zell, Toni Schneidereit, Armin Fügenschuh, and Michael Breuß. Advanced Search and Rescue Operations for Drowning Swimmers using Autonomous Unmanned Aircraft Systems: Location Optimization, Flight Trajectory Planning and Image-based Localization, 2024

  71. [71]

    Rafael Padilla, Sergio L. Netto, and Eduardo A. B. da Silva. A Survey on Performance Metrics for Object-Detection Algorithms. In 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niterói, Brazil, July 2020. IEEE. doi: 10.1109/iwssip48289.2020.9145130

  72. [72]

    Andrew B. Watson. High Frame Rates and Human Vision: A View through the Window of Visibility. SMPTE Motion Imaging Journal, 122(2):18–32, March 2013. doi: 10.5594/j18266xy

  74. [74]

    Ankur Handa, Richard A. Newcombe, Adrien Angeli, and Andrew J. Davison. Real-Time Camera Tracking: When is High Frame-Rate Best? In Computer Vision – ECCV 2012, pages 222–235, Florence, Italy, 2012. Springer. doi: 10.1007/978-3-642-33786-4_17

  75. [75]

    Konrad Schindler and Luc van Gool. Action Snippets: How Many Frames does Human Action Recognition Require? In 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Alaska, USA, June 2008. IEEE. doi: 10.1109/cvpr.2008.4587730

  76. [76]

    E. W. Dijkstra. A Note on two Problems in Connexion with Graphs. Numerische Mathematik, 1(1):269–271, 1959. doi: 10.1007/BF01386390

  77. [77]

    Jukka Pappinen and Hilla Nordquist. Driving Speeds in Urgent and Non-urgent Ambulance Missions during Normal and Reduced Winter Speed Limit Periods—A Descriptive Study. Nursing Reports, 12(1):50–58, 2022. doi: 10.3390/nursrep12010006

  78. [78]

    Der-Tsai Lee. Proximity and Reachability in the Plane. University of Illinois at Urbana-Champaign, 1978

  79. [79]

    DIN Deutsches Institut für Normung e.V. DIN 14961:2025-09 – Boote für die Feuerwehr [Boats for the fire service], September 2025. URL https://www.dinmedia.de/de/norm/din-14961/392911277. Beuth Verlag, Berlin. (Accessed: 2025-10-21)

  80. [80]

    Blick aktuell. Starkes Boot für starke Strömung [Strong boat for strong currents], 09

Showing first 80 references.