Pith · machine review for the scientific record

arxiv: 2602.20028 · v3 · submitted 2026-02-20 · 💻 cs.CV · cs.AI

Recognition: 2 theorem links · Lean Theorem

Descriptor: Parasitoid Wasps and Associated Hymenoptera Dataset (DAPWH)

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 20:43 UTC · model grok-4.3

classification: 💻 cs.CV · cs.AI
keywords: Parasitoid wasps · Ichneumonidae · Braconidae · Image dataset · COCO annotations · Computer vision · Hymenoptera · Biodiversity monitoring

The pith

A dataset of 3,556 high-resolution images with 1,739 COCO annotations supports training computer vision models to identify Neotropical parasitoid wasps.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a curated collection of 3,556 high-resolution images centered on Neotropical Ichneumonidae and Braconidae parasitoid wasps, with supplementary images from nine other Hymenoptera families added to increase variety. A subset of 1,739 images carries COCO-format bounding-box annotations that separately mark the full insect body, wing venation, and scale bars. These wasps are ecologically important for controlling insect populations yet remain difficult to identify because of cryptic morphology and the large number of undescribed species. The authors position the dataset as a practical resource that can accelerate the development of automated identification systems for biodiversity monitoring and agricultural applications.

Core claim

The authors present the Parasitoid Wasps and Associated Hymenoptera Dataset (DAPWH) containing 3,556 high-resolution images focused primarily on Neotropical Ichneumonidae and Braconidae, supplemented by images from Andrenidae, Apidae, Bethylidae, Chrysididae, Colletidae, Halictidae, Megachilidae, Pompilidae, and Vespidae. Of these, 1,739 images are supplied with multi-class COCO annotations that provide bounding boxes for the complete insect body, wing venation, and scale bars. The resource is offered specifically to enable computer vision models that perform automated family-level identification of these taxonomically challenging parasitoid groups.
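
The three-class annotation scheme can be pictured as a minimal COCO-format record. Everything specific below (file name, image dimensions, the category strings `insect_body`, `wing_venation`, `scale_bar`, and all coordinates) is an illustrative assumption; only the three annotation classes and the standard COCO `[x, y, width, height]` bbox convention come from the paper.

```python
import json

# Hypothetical example record; category names and file names are assumed.
# Only the three annotation classes come from the paper's description.
coco = {
    "images": [
        {"id": 1, "file_name": "ichneumonidae_0001.jpg",
         "width": 4000, "height": 3000},
    ],
    "categories": [
        {"id": 1, "name": "insect_body"},
        {"id": 2, "name": "wing_venation"},
        {"id": 3, "name": "scale_bar"},
    ],
    "annotations": [
        # COCO bbox convention: [x, y, width, height] in pixels
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [420, 310, 2100, 1650], "area": 2100 * 1650, "iscrowd": 0},
        {"id": 2, "image_id": 1, "category_id": 2,
         "bbox": [900, 500, 800, 600], "area": 800 * 600, "iscrowd": 0},
        {"id": 3, "image_id": 1, "category_id": 3,
         "bbox": [100, 2800, 600, 80], "area": 600 * 80, "iscrowd": 0},
    ],
}

# Every bbox should lie inside its image and have positive extent.
img = coco["images"][0]
for a in coco["annotations"]:
    x, y, w, h = a["bbox"]
    assert w > 0 and h > 0
    assert x + w <= img["width"] and y + h <= img["height"]

print(json.dumps(coco)[:60] + "...")
```

The same bounds check is a cheap validity screen to run over all 1,739 annotated images before training.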

What carries the argument

The DAPWH dataset's COCO-annotated subset, which supplies multi-class bounding boxes for the full insect body, wing venation, and scale bars so that detection and classification models can learn diagnostic morphological features.

If this is right

  • Models trained on the dataset can localize and classify parasitoid wasps to family level using visible body and wing features.
  • Separate annotations for wing venation allow models to exploit a key morphological character used in traditional taxonomy.
  • Inclusion of multiple Hymenoptera families improves the ability of models to distinguish target groups from similar-looking insects.
  • The dataset supplies training material that can scale identification beyond the capacity of manual taxonomic work in the Neotropics.
  • Better automated recognition of parasitoids supports more precise tracking of natural enemies in agricultural pest management.
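
A first audit of the released annotation file would confirm the three-class structure and per-class box counts. This is a sketch: the class name strings in `sample` are assumptions standing in for the real file, which would be loaded with `json.load` instead.

```python
from collections import Counter

def class_counts(coco):
    """Count bounding boxes per category name in a COCO-format dict."""
    names = {c["id"]: c["name"] for c in coco["categories"]}
    return Counter(names[a["category_id"]] for a in coco["annotations"])

# Tiny stand-in for the real annotation file; the three class names
# mirror the paper's description but the exact strings are assumptions.
sample = {
    "categories": [
        {"id": 1, "name": "insect_body"},
        {"id": 2, "name": "wing_venation"},
        {"id": 3, "name": "scale_bar"},
    ],
    "annotations": [
        {"category_id": 1}, {"category_id": 2},
        {"category_id": 3}, {"category_id": 1},
    ],
}

print(dict(class_counts(sample)))
```

Large imbalances between the three classes (e.g. far fewer wing-venation boxes than body boxes) would be the first thing to check before trusting a trained detector's per-class metrics.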

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Adding species-level labels to a portion of the images would test whether the same annotation approach supports finer taxonomic resolution.
  • The dataset could be combined with citizen-science photographs to check how well models generalize to lower-quality field images.
  • Similar annotation protocols could be applied to other under-documented insect superfamilies to create comparable training resources.
  • Public release of the dataset may encourage joint projects between taxonomists and machine-learning groups to refine diagnostic features.

Load-bearing premise

The images have been correctly identified to family level by taxonomic experts, and the chosen annotation scheme and image diversity are sufficient to train reliable identification models.

What would settle it

Train an object-detection model on the 1,739 annotated images and evaluate it on an independent set of expert-verified wasp photographs collected outside the dataset; a sharp drop in precision or recall for body or wing localization would falsify the claim that the annotations support robust automated identification.
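
That falsification test needs a localization scorer. The sketch below uses greedy one-to-one matching at a fixed IoU threshold; it is a simplification, not the full COCO mAP protocol an actual evaluation would likely use.

```python
def iou(a, b):
    """IoU of two COCO-style [x, y, width, height] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best >= thr and best_i is not None:
            matched.add(best_i)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

# One prediction overlaps the ground-truth box well, one is spurious.
preds = [[0, 0, 10, 10], [20, 20, 5, 5]]
gts = [[1, 1, 10, 10]]
print(precision_recall(preds, gts))  # (0.5, 1.0)
```

Running this per class (body, wing venation, scale bar) on an independent expert-verified test set is exactly the kind of drop-off measurement the falsification criterion calls for.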

Figures

Figures reproduced from arXiv: 2602.20028 by Alvaro Doria Dos Santos, Angelica Maria Penteado-Dias, Eduardo A. B. Almeida, Gabriela Do Nascimento Herrera, Helena Carolina Onody, Joao Manoel Herrera Pinheiro, Luciana Bueno Dos Reis Fernandes, Marcelo Andrade Da Costa Vieira, Marcelo Becker, Ricardo V. Godoy.

Figure 1 · figures/full_fig_p002_1.png · view at source ↗
Figure 2 · figures/full_fig_p003_2.png · view at source ↗
Figure 3 · figures/full_fig_p003_3.png · view at source ↗
Figure 4 · figures/full_fig_p004_4.png · view at source ↗
Figure 5 · figures/full_fig_p005_5.png · view at source ↗
Figure 6 · figures/full_fig_p006_6.png · view at source ↗
Figure 7 · figures/full_fig_p006_7.png · view at source ↗
Figure 8 · figures/full_fig_p008_8.png · view at source ↗
Figure 9 · figures/full_fig_p008_9.png · view at source ↗
Figure 10 · figures/full_fig_p009_10.png · view at source ↗
Figure 11 · figures/full_fig_p010_11.png · view at source ↗
Original abstract

Accurate taxonomic identification is the cornerstone of biodiversity monitoring and agricultural management, particularly for the hyper-diverse superfamily Ichneumonoidea. Comprising the families Ichneumonidae and Braconidae, these parasitoid wasps are ecologically critical for regulating insect populations, yet they remain one of the most taxonomically challenging groups due to their cryptic morphology and vast number of undescribed species. To address the scarcity of robust digital resources for these key groups, we present a curated image dataset designed to advance automated identification systems. The dataset contains 3,556 high-resolution images, primarily focused on Neotropical Ichneumonidae and Braconidae, while also including supplementary families such as Andrenidae, Apidae, Bethylidae, Chrysididae, Colletidae, Halictidae, Megachilidae, Pompilidae, and Vespidae to improve model robustness. Crucially, a subset of 1,739 images is annotated in COCO format, featuring multi-class bounding boxes for the full insect body, wing venation, and scale bars. This resource provides a foundation for developing computer vision models capable of identifying these families.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents the DAPWH dataset, a collection of 3,556 high-resolution images focused on Neotropical Ichneumonidae and Braconidae parasitoid wasps, supplemented with images from nine other Hymenoptera families. A subset of 1,739 images is annotated in COCO format with multi-class bounding boxes covering the full insect body, wing venation, and scale bars. The stated goal is to provide a foundation for computer vision models that perform automated family-level identification of these taxonomically challenging groups.

Significance. If the family-level labels prove accurate and the annotations are consistent, the dataset would address a genuine scarcity of public, high-resolution resources for training identification models on Ichneumonoidea. The multi-class bounding boxes and inclusion of supplementary families could support more robust feature learning than single-class datasets. However, the lack of any reported validation for labels or annotations means the resource cannot yet be treated as a verified benchmark.

major comments (2)
  1. [Abstract / Dataset Description] The claim that the dataset is 'curated' and supplies a 'reliable foundation' for automated identification rests on the accuracy of family-level labels, yet the manuscript provides no information on who performed the identifications, what keys or reference collections were used, whether any specimens received molecular confirmation, or any validation protocol. This information is load-bearing for the central claim.
  2. [Annotation Details] No details are given on the creation of the 1,739 COCO annotations, including annotator expertise, guidelines for placing bounding boxes around wing venation and scale bars, or any quality metrics such as inter-annotator agreement. Without these, the consistency of the multi-class labels cannot be evaluated.
minor comments (2)
  1. [Introduction] The manuscript would benefit from a brief comparison table placing DAPWH against existing public Hymenoptera image datasets in terms of image count, resolution, annotation type, and taxonomic coverage.
  2. [Methods] Image acquisition metadata (camera model, lighting conditions, specimen mounting) is mentioned only in passing; a short table summarizing these parameters would improve reproducibility.
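
The inter-annotator agreement statistic the referee asks for could be as simple as the following sketch: the fraction of one annotator's labels that a second annotator reproduced with the same class at IoU ≥ 0.5. All names and boxes here are illustrative, not drawn from the dataset.

```python
def iou_xywh(a, b):
    """IoU of two [x, y, width, height] boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def agreement_rate(ann_a, ann_b, thr=0.5):
    """Fraction of annotator A's (class, box) labels that annotator B
    reproduced with the same class at IoU >= thr. A simple proxy for
    inter-annotator agreement; no one-to-one matching is enforced."""
    if not ann_a:
        return 0.0
    hits = sum(
        any(ca == cb and iou_xywh(ba, bb) >= thr for cb, bb in ann_b)
        for ca, ba in ann_a
    )
    return hits / len(ann_a)

# Toy check: the body boxes nearly agree; B missed the scale bar.
a = [("insect_body", [0, 0, 100, 100]), ("scale_bar", [0, 200, 60, 10])]
b = [("insect_body", [5, 5, 100, 100])]
print(agreement_rate(a, b))  # 0.5
```

Reporting this number on even a small double-annotated subset would address the referee's second major comment without re-annotating the full 1,739 images.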

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback on the DAPWH dataset manuscript. The comments correctly identify areas where additional documentation is needed to support the dataset's utility for computer vision research. We address each point below and have revised the manuscript to provide greater transparency on identification and annotation processes.

read point-by-point responses
  1. Referee: [Abstract / Dataset Description] The claim that the dataset is 'curated' and supplies a 'reliable foundation' for automated identification rests on the accuracy of family-level labels, yet the manuscript provides no information on who performed the identifications, what keys or reference collections were used, whether any specimens received molecular confirmation, or any validation protocol. This information is load-bearing for the central claim.

    Authors: We agree that the original manuscript did not sufficiently document the labeling process, which is necessary to evaluate the dataset's reliability. In the revised version, we have added a dedicated subsection on data curation and labeling. Family-level identifications were performed by the authors using standard morphological keys for Neotropical Ichneumonoidea and associated Hymenoptera families, with reference to institutional collections for verification on a subset of specimens. No molecular barcoding was performed across the dataset owing to resource limitations; this is now explicitly stated as a limitation. We have also moderated the abstract language from 'reliable foundation' to 'foundation' to align with the available validation. revision: yes

  2. Referee: [Annotation Details] No details are given on the creation of the 1,739 COCO annotations, including annotator expertise, guidelines for placing bounding boxes around wing venation and scale bars, or any quality metrics such as inter-annotator agreement. Without these, the consistency of the multi-class labels cannot be evaluated.

    Authors: We acknowledge the absence of annotation methodology details in the submitted manuscript. The revised Annotation section now specifies that annotations were carried out by researchers with combined expertise in entomology and image annotation. Guidelines required bounding boxes to enclose the complete insect body, delineate visible wing venation structures, and incorporate scale bars. A primary annotator created the labels with subsequent review by a second team member for obvious errors, although quantitative inter-annotator agreement statistics were not computed. These procedural details and any quality-control steps have been incorporated into the revised manuscript. revision: yes

Circularity Check

0 steps flagged

The dataset descriptor paper contains no derivations, predictions, or self-referential claims.

full rationale

The manuscript is a straightforward descriptive release of a curated image dataset (3,556 images, 1,739 COCO-annotated). It presents no equations, no fitted parameters, no predictions, and no derivation chain. The central claim is simply that the dataset exists with the stated contents and annotation format; this claim is not derived from any prior result within the paper or via self-citation. No load-bearing step reduces to its own inputs by construction. The absence of any mathematical or predictive content makes circularity analysis inapplicable; the paper is self-contained as a data descriptor.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the existence of a curated, accurately labeled image collection; no free parameters or invented entities are introduced. The only non-trivial assumption is expert taxonomic accuracy of the labels.

axioms (1)
  • domain assumption Specimens have been correctly identified to family level by taxonomic experts.
    Label accuracy is required for the dataset to serve as reliable training data for identification models.

pith-pipeline@v0.9.0 · 5556 in / 1420 out tokens · 40284 ms · 2026-05-15T20:43:42.793760+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages · 6 internal anchors
