Autonomous Unmanned Aircraft Systems for Enhanced Search and Rescue of Drowning Swimmers: Image-Based Localization and Mission Simulation
Pith reviewed 2026-05-10 04:24 UTC · model grok-4.3
The pith
An unmanned aircraft system with YOLO-based swimmer detection, evaluated through discrete-event simulation, cuts drowning-rescue response time fivefold even in a minimal two-hangar setup.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that a UAS of UAVs stationed in purpose-built hangars can execute fully automated S&R missions, using YOLO for automatic detection of distressed swimmers in images. DES models of the full rescue timeline show that this approach shortens response time by a factor of five compared with standard rescue operations, even when only two hangars, each holding one UAV, are deployed in the test region.
What carries the argument
YOLO-based image object detection trained on a custom swimmer dataset, paired with discrete-event simulation of complete response timelines for both UAS and standard rescue operations.
If this is right
- UAS can be added to existing standard rescue operations to locate the target earlier and deliver a flotation device automatically.
- YOLO models from versions 3, 5, and 8 in nano to extra-large sizes can be selected based on measured mAP performance for this swimmer detection task.
- DES runs let planners choose the best number, type, and placement of hangars and UAVs by quantifying time savings for each configuration.
- In the Lusatian Lake District example, the smallest tested UAS already delivers the reported fivefold response-time improvement.
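As a sketch of how such a DES comparison works, the toy model below chains dispatch, travel, and localization events for one mission and returns the end-to-end time; every number here (latencies, speeds, the 2 km incident distance) is an illustrative assumption, not a calibrated input from the paper.

```python
import heapq

def simulate_response(dispatch_delay_s, travel_speed_mps, distance_m, locate_time_s):
    """Toy event chain for one rescue mission: dispatch -> travel -> locate.

    Returns the total response time in seconds. All parameters are
    illustrative assumptions, not the paper's calibrated DES inputs.
    """
    t_dispatch = dispatch_delay_s
    t_on_scene = t_dispatch + distance_m / travel_speed_mps
    t_located = t_on_scene + locate_time_s
    events = [(t_dispatch, "dispatched"), (t_on_scene, "on_scene"), (t_located, "target_located")]
    heapq.heapify(events)  # a DES engine pops events in time order
    t = 0.0
    while events:
        t, _label = heapq.heappop(events)
    return t

# Hypothetical scenario: incident 2 km from the nearest base or hangar.
sro = simulate_response(dispatch_delay_s=120, travel_speed_mps=8, distance_m=2000, locate_time_s=300)
uas = simulate_response(dispatch_delay_s=30, travel_speed_mps=20, distance_m=2000, locate_time_s=60)
print(f"SRO {sro:.0f} s, UAS {uas:.0f} s, ratio {sro / uas:.1f}x")
```

The paper's actual DES models are richer (stochastic delays, fleet allocation, detector misses across two modeling approaches); this sketch only shows the event-queue mechanics behind a timeline comparison.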
Where Pith is reading between the lines
- The same UAS-plus-YOLO pipeline could be adapted for other open-water emergencies where visual detection and rapid payload delivery are needed.
- Linking the system to lifeguard dispatch centers might allow hybrid human-UAS responses that further reduce total time.
- Validation flights in real conditions would directly test whether the simulation assumptions hold when unmodeled factors such as wind or water glare appear.
Load-bearing premise
The YOLO detector will keep high accuracy on real water surfaces under changing light and swimmer positions, and the simulation models will capture every significant real-world delay.
What would settle it
A field trial that flies the actual UAVs over swimmers in the target lake area under varied lighting and weather, records detection success rates, and compares measured end-to-end rescue times against the DES predictions.
read the original abstract
Drowning is an omnipresent risk associated with any activity on or in the water, and rescuing a drowning person is particularly challenging because of the time pressure, making a short response time important. Further complicating water rescue are unsupervised and extensive swimming areas, precise localization of the target, and the transport of rescue personnel. Technical innovations can provide a remedy: We propose an Unmanned Aircraft System (UAS), also known as a drone-in-a-box system, consisting of a fleet of Unmanned Aerial Vehicles (UAVs) allocated to purpose-built hangars near swimming areas. In an emergency, the UAS can be deployed in addition to Standard Rescue Operation (SRO) equipment to locate the distressed person early by performing a fully automated Search and Rescue (S&R) operation and dropping a flotation device. In this paper, we address automatically locating distressed swimmers using the image-based object detection architecture You Only Look Once (YOLO). We present a dataset created for this application and outline the training process. We evaluate the performance of YOLO versions 3, 5, and 8 and architecture sizes (nano, extra-large) using Mean Average Precision (mAP) metrics mAP@.5 and mAP@.5:.95. Furthermore, we present two Discrete-Event Simulation (DES) approaches to simulate response times of SRO and UAS-based water rescue. This enables estimation of time savings relative to SRO when selecting the UAS configuration (type, number, and location of UAVs and hangars). Computational experiments for a test area in the Lusatian Lake District, Germany, show that UAS assistance shortens response time. Even a small UAS with two hangars, each containing one UAV, reduces response time by a factor of five compared to SRO.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a drone-in-a-box UAS for automated search-and-rescue of drowning swimmers. It describes creation of a custom dataset, training and mAP evaluation of YOLO v3/v5/v8 models (nano to extra-large) for image-based swimmer localization, and two discrete-event simulation (DES) models that compare UAS-assisted response times against standard rescue operations (SRO). Experiments for the Lusatian Lake District conclude that even a minimal configuration of two hangars each containing one UAV reduces response time by a factor of five relative to SRO.
Significance. If the detector performance and simulation assumptions transfer to operational conditions, the work supplies a concrete, configurable framework for quantifying time savings from UAS augmentation of water rescue. The systematic comparison across YOLO variants and the explicit DES modeling of fleet size and hangar placement are strengths that could inform deployment decisions.
major comments (3)
- [Abstract] The mAP@.5 and mAP@.5:.95 results for the YOLO models are reported without dataset cardinality, train/validation/test split, number of epochs, augmentation strategy, or any measure of variability (error bars or multiple seeds). These omissions are load-bearing because the claimed usability of image-based localization directly feeds the DES inputs that produce the factor-of-five claim.
- [Abstract] The factor-of-five response-time reduction for the two-hangar/one-UAV configuration is presented as a computational result, yet no values are given for flight speed, dispatch latency, flotation-device drop time, false-negative rate of the detector, or any sensitivity analysis on these parameters. Without these, the quantitative advantage cannot be reproduced or stress-tested against realistic operational variability.
- [Evaluation and Simulation sections] The manuscript contains no field validation, real-incident footage, or controlled tests under variable lighting, wave action, or swimmer poses. Because the central practical claim is that the UAS shortens response time in actual drowning events, the absence of any transfer evidence from the custom dataset to real water conditions is a load-bearing gap.
minor comments (2)
- [Abstract] The abstract states that two DES approaches are used but does not indicate how they differ in modeling detection outcomes or handling stochastic delays.
- [Evaluation] Standard mAP definitions and the exact IoU thresholds should be referenced or restated for readers unfamiliar with the COCO-style metrics.
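For readers unfamiliar with the COCO-style metrics named in the minor comment above, the test behind mAP@.5 reduces to an intersection-over-union check per predicted box; the sketch below uses invented box coordinates purely for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@.5, a detection matching a ground-truth box counts as a true
# positive when IoU >= 0.5; mAP@.5:.95 averages AP over IoU thresholds
# 0.50, 0.55, ..., 0.95.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # partial overlap, below 0.5
```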
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment point-by-point below, indicating revisions where the manuscript will be updated in the next version.
read point-by-point responses
- Referee: [Abstract] The mAP@.5 and mAP@.5:.95 results for the YOLO models are reported without dataset cardinality, train/validation/test split, number of epochs, augmentation strategy, or any measure of variability (error bars or multiple seeds). These omissions are load-bearing because the claimed usability of image-based localization directly feeds the DES inputs that produce the factor-of-five claim.
Authors: The Evaluation section of the manuscript describes the custom dataset creation, including its cardinality and the train/validation/test split ratios, along with the training process covering epochs and augmentation strategies. The abstract was intentionally kept brief. To improve accessibility, we will revise the abstract to concisely include dataset size, split details, key training hyperparameters, and a note on the single-run nature of the reported mAP values. We will also add a statement on reproducibility in the Evaluation section. revision: yes
- Referee: [Abstract] The factor-of-five response-time reduction for the two-hangar/one-UAV configuration is presented as a computational result, yet no values are given for flight speed, dispatch latency, flotation-device drop time, false-negative rate of the detector, or any sensitivity analysis on these parameters. Without these, the quantitative advantage cannot be reproduced or stress-tested against realistic operational variability.
Authors: The Simulation section specifies the parameter values used in the DES models, such as flight speeds, dispatch latencies, drop times, and incorporation of the detector's false-negative rate derived from the mAP results. The factor-of-five outcome is computed from these. We agree that sensitivity analysis would strengthen the claims and have added a dedicated subsection performing sensitivity analysis on these parameters (e.g., varying flight speed and false-negative rates) to the revised manuscript. revision: yes
- Referee: [Evaluation and Simulation sections] The manuscript contains no field validation, real-incident footage, or controlled tests under variable lighting, wave action, or swimmer poses. Because the central practical claim is that the UAS shortens response time in actual drowning events, the absence of any transfer evidence from the custom dataset to real water conditions is a load-bearing gap.
Authors: This is a valid limitation of the current work, which relies on a custom dataset and simulation rather than real-time field deployments. We have expanded the Discussion section to explicitly address the gap in transfer evidence, discuss potential domain shifts due to lighting/wave conditions, and outline planned future field validation. New real-world experiments cannot be added within this revision cycle. revision: partial
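The sensitivity analysis promised in the responses above can be sketched as a one-parameter sweep over a toy response-time model; the model and every value below are illustrative assumptions, not the paper's DES.

```python
def response_time(dispatch_s, speed_mps, distance_m, locate_s, miss_rate):
    """Expected response time when a first search pass may miss the swimmer.

    A miss is modeled crudely as one repeated localization pass; every
    parameter here is an illustrative assumption, not a calibrated value.
    """
    return dispatch_s + distance_m / speed_mps + locate_s * (1 + miss_rate)

# Sweep UAV cruise speed while holding the other assumptions fixed, to see
# how strongly the end-to-end time depends on this single parameter.
for speed in (10, 15, 20, 25):
    t = response_time(dispatch_s=30, speed_mps=speed, distance_m=2000, locate_s=60, miss_rate=0.1)
    print(f"{speed:>2} m/s -> {t:6.1f} s")
```

A full analysis would sweep each DES input (dispatch latency, drop time, false-negative rate) the same way and report how the factor-of-five ratio moves.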
Circularity Check
No circularity: simulation outputs are computed from independent detection metrics and operational parameters
full rationale
The paper creates a new swimmer dataset, trains and evaluates YOLO variants to produce mAP scores as empirical measurements, then feeds those scores plus separately assumed flight/drop/dispatch times into two DES models to compute response-time ratios. The factor-of-five claim is an output of the simulation run on the Lusatian Lake District scenario, not a re-expression of the mAP values or a fitted parameter renamed as prediction. No equations, self-citations, or uniqueness theorems are invoked that would make the result definitionally equivalent to its inputs. The derivation chain is self-contained, and its inputs can be checked against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- UAS fleet size and hangar placement
axioms (2)
- domain assumption: YOLO object detectors trained on the custom dataset generalize to real drowning incidents
- domain assumption: The discrete-event simulation accurately represents all relevant delays in both SRO and UAS operations