pith. machine review for the scientific record.

arxiv: 2604.11332 · v1 · submitted 2026-04-13 · 💻 cs.CV · cs.AI

Recognition: unknown

A Compact and Efficient 1.251 Million Parameter Machine Learning CNN Model PD36-C for Plant Disease Detection: A Case Study

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 16:20 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords plant disease detection · convolutional neural network · compact model · edge deployment · image classification · smart agriculture · New Plant Diseases Dataset · TensorFlow Keras

The pith

A 1.25-million-parameter CNN reaches 99.5% accuracy classifying plant leaves into 38 disease and healthy categories while running on ordinary hardware.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents PD36-C, a deliberately small convolutional neural network built for classifying images of plant leaves into 38 disease and healthy categories. Trained on the New Plant Diseases Dataset of roughly 87,000 images, the model uses only 1.25 million parameters yet records 99.7% training accuracy by epoch 30 and 99.53% average test accuracy. A companion Qt-based desktop application allows offline inference on commodity laptops or desktops. The authors argue that these numbers demonstrate how careful architecture choices plus a well-curated dataset let compact networks compete with much larger models in agricultural settings.

Core claim

PD36-C is a compact CNN with 1,250,694 parameters and a 4.77 MB footprint that attains 0.9953 average test accuracy across 38 classes on the New Plant Diseases Dataset, with several classes reaching perfect precision and recall of 1.0 while the lowest-performing class still exceeds 0.96 recall.
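The 4.77 MB figure is consistent with the stated parameter count stored as 32-bit floats; a quick arithmetic check (ours, not the paper's):

```python
# Back-of-envelope check: parameter count -> model footprint at float32.
PARAMS = 1_250_694          # parameter count reported in the paper
BYTES_PER_FLOAT32 = 4

size_bytes = PARAMS * BYTES_PER_FLOAT32
size_mib = size_bytes / (1024 ** 2)   # mebibytes, as file managers report

print(f"{size_mib:.2f} MB")  # ~4.77 MB, matching the reported footprint
```

So the reported size implies essentially no quantization or compression: the file is the raw float32 weights.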

What carries the argument

PD36-C, a custom convolutional neural network whose layer count, filter sizes, and pooling strategy are chosen to minimize parameter count while preserving accuracy on leaf-image inputs.
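To make the parameter-budget argument concrete, here is a minimal sketch of how layer choices translate into a parameter count. The layer widths and kernel sizes below are invented for illustration; the actual PD36-C configuration is not reproduced in this summary.

```python
# Hypothetical illustration of how architectural choices set a CNN's
# parameter count. The layer shapes are assumptions for the sketch,
# not the paper's actual PD36-C configuration.

def conv2d_params(in_ch, out_ch, k, bias=True):
    """Parameters of a k x k convolution: k*k*in_ch*out_ch weights (+ biases)."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

def dense_params(in_units, out_units, bias=True):
    """Parameters of a fully connected layer."""
    return in_units * out_units + (out_units if bias else 0)

# A made-up five-block backbone on 224x224x3 input; pooling between blocks
# shrinks the spatial size but adds no parameters.
channels = [3, 16, 32, 64, 128, 128]
backbone = sum(conv2d_params(i, o, k=3) for i, o in zip(channels, channels[1:]))

# Global average pooling to 128 features, then a 38-way classifier head.
head = dense_params(128, 38)

total = backbone + head
print(f"backbone={backbone:,}  head={head:,}  total={total:,}")
```

Each conv term scales as k·k·in·out, so filter counts and kernel sizes dominate the budget; this is the lever a designer pulls to stay near the 1.25M target while keeping accuracy.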

If this is right

  • Farmers and agronomists can run high-accuracy disease checks on standard laptops without cloud access or specialized GPUs.
  • The same design principles can be reused to create similarly compact models for other image-based agricultural tasks such as weed identification or ripeness grading.
  • Deployment cost and power draw drop sharply compared with models that contain tens of millions of parameters.
  • Per-class metrics near 1.0 on many categories show that the architecture avoids catastrophic failure on any single disease type.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Extending the model with simple domain-adaptation layers could address the performance drop the authors themselves flag for adverse weather and multi-disease leaves.
  • The low parameter count makes on-device fine-tuning on new regional datasets feasible, potentially allowing local customization without retraining from scratch.
  • Because the desktop application already runs offline, the same pipeline could be ported to mobile phones or single-board computers with only modest further compression.
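If the pipeline were ported to phones or single-board computers, post-training quantization is the usual first compression step. A back-of-envelope estimate, assuming weight storage dominates file size (our extrapolation, not a result from the paper):

```python
# Sketch: estimated on-disk footprint of a 1,250,694-parameter model at
# different weight precisions, assuming weights dominate the file size.
# Quantizing PD36-C is our extrapolation, not an experiment from the paper.
PARAMS = 1_250_694

sizes_mib = {
    name: PARAMS * bytes_per_weight / (1024 ** 2)
    for name, bytes_per_weight in [("float32", 4), ("float16", 2), ("int8", 1)]
}
for name, mib in sizes_mib.items():
    print(f"{name:>8}: ~{mib:.2f} MB")  # float32 reproduces the 4.77 MB figure
```

An int8 version would land near 1.2 MB, comfortably inside the flash and RAM budgets of typical single-board computers, though accuracy after quantization would need to be re-measured.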

Load-bearing premise

The New Plant Diseases Dataset already contains enough visual variety to stand in for real farm conditions.

What would settle it

If the assumption fails, accuracy measured on a fresh collection of field photographs taken under uncontrolled lighting, angles, and weather, or on leaves showing two diseases at once, would fall below 0.95.

Figures

Figures reproduced from arXiv: 2604.11332 by Shkelqim Sherifi.

Figure 6
Figure 6. Backbone feature extractor: the backbone progressively extracts hierarchical features from the input image (I ∈ R^(224×224×3)) through five convolutional blocks; Fig. 7a shows PD36-C Block 1.
read the original abstract

Deep learning has markedly advanced image based plant disease diagnosis as improved hardware and dataset quality have enabled increasingly accurate neural network models. This paper presents PD36 C, a compact convolutional neural network (1,250,694 parameters and 4.77 MB) for plant disease classification. Trained with TensorFlow Keras on the New Plant Diseases Dataset (87k images, 38 classes), PD36 C is designed for robustness and edge deployability, complemented by a Qt for Python desktop application that offers an intuitive GUI and offline inference on commodity hardware. Across experiments, training accuracy reached 0.99697 by epoch 30, and average test accuracy was 0.9953 across 38 classes. Per class performance is uniformly high; on the lower end, Corn (maize) Cercospora leaf spot achieved precision around 0.9777 and recall around 0.9634, indicating occasional confusion with visually similar categories, while on the upper end numerous classes including Apple Black rot, Cedar apple rust, Blueberry healthy, Cherry Powdery mildew, Cherry healthy, and all four grape categories achieved perfect precision 1.00 and recall of 1.00, indicating no false positives and strong coverage. These results show that with a well curated dataset and careful architectural design, small CNNs can achieve competitive accuracy compared with recent baselines while remaining practical for edge scenarios. We also note typical constraints such as adverse weather, low quality imagery, and leaves exhibiting multiple concurrent diseases that can degrade performance and warrant future work on domain robustness. Overall, PD36 C and its application pipeline contribute a field ready, efficient solution for AI assisted plant disease detection in smart agriculture.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The manuscript introduces PD36-C, a compact CNN with 1,250,694 parameters (4.77 MB) for classifying plant diseases in leaf images. Trained on the New Plant Diseases Dataset (87k images, 38 classes) using TensorFlow/Keras, it reports 0.99697 training accuracy by epoch 30 and 0.9953 average test accuracy, with per-class precision/recall near-perfect for most categories and a floor of ~0.9777/0.9634 for Corn Cercospora leaf spot. A Qt-based desktop GUI is provided for offline inference. The central claim is that careful architectural design enables small CNNs to reach competitive accuracy on this dataset while remaining practical for edge deployment in smart agriculture, though the authors note degradation risks from weather, image quality, and concurrent diseases.

Significance. If the high test accuracy proves reproducible and generalizes, the work would show that sub-2M-parameter CNNs can deliver near-99% accuracy on curated plant-disease imagery with a footprint suitable for commodity hardware, lowering barriers to on-device AI in agriculture. The accompanying application pipeline further supports practical deployment. However, without demonstrated gains over baselines or validation outside the standard split, the incremental significance for the CV community remains modest.

major comments (3)
  1. Abstract and Results: The claim that PD36-C achieves 'competitive accuracy compared with recent baselines' is unsupported by any quantitative table or direct comparison (parameter counts, accuracies, or FLOPs of alternatives such as MobileNetV2, EfficientNet-B0, or prior plant-disease CNNs). This omission is load-bearing for the efficiency-accuracy contribution.
  2. Methodology: No description is given of the train/test split ratio, data-augmentation pipeline, hyperparameter search, or optimizer settings used to obtain the 0.9953 test accuracy. Reproducibility of the reported metrics therefore cannot be verified, weakening support for the central performance claim.
  3. Discussion and deployment claims: Edge practicality is asserted solely from model size and the Qt GUI; no inference-latency, memory-footprint, or power measurements on target edge hardware (e.g., Raspberry Pi, mobile SoC) are reported, nor are any out-of-distribution tests on field images with varying lighting, angles, or concurrent diseases. The abstract itself flags these factors as performance degraders, so the generalization half of the claim rests on an untested assumption.
minor comments (3)
  1. Title vs. abstract: model name appears as 'PD36-C' in the title but 'PD36 C' in the abstract; standardize.
  2. Parameter count: listed as '1.251 Million' in the title but '1,250,694' in the abstract; adopt consistent scientific notation.
  3. Per-class metrics are described in text only; a compact table or confusion-matrix figure would improve readability and allow readers to assess error patterns directly.
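The table requested in minor comment 3 is straightforward to derive once a confusion matrix is available. A minimal sketch with an invented 3×3 matrix (the paper's 38-class matrix is not reproduced here):

```python
# Sketch of the per-class table the referee asks for: precision and recall
# derived from a confusion matrix. The 3x3 matrix below is invented for
# illustration only.

def per_class_metrics(cm):
    """cm[i][j] = count of true class i predicted as class j."""
    n = len(cm)
    metrics = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # column sum minus diagonal
        fn = sum(cm[c]) - tp                        # row sum minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics.append((precision, recall))
    return metrics

cm = [
    [98, 1, 1],   # class 0: 98 correct, 2 confused away
    [0, 100, 0],  # class 1: perfect row -> recall 1.0
    [2, 0, 98],   # class 2
]
for c, (p, r) in enumerate(per_class_metrics(cm)):
    print(f"class {c}: precision={p:.4f} recall={r:.4f}")
```

Reading the off-diagonal cells directly (which classes absorb each other's errors) is exactly the error-pattern analysis the referee says the prose-only presentation obscures.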

Simulated Author's Rebuttal

3 responses · 2 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. The comments have highlighted important areas where additional clarity and support for our claims are needed. We address each major comment point by point below, indicating the revisions we will make to the manuscript.

read point-by-point responses
  1. Referee: Abstract and Results: The claim that PD36-C achieves 'competitive accuracy compared with recent baselines' is unsupported by any quantitative table or direct comparison (parameter counts, accuracies, or FLOPs of alternatives such as MobileNetV2, EfficientNet-B0, or prior plant-disease CNNs). This omission is load-bearing for the efficiency-accuracy contribution.

    Authors: We acknowledge that the manuscript does not include a direct quantitative comparison table, which is necessary to fully support the claim of competitive accuracy. In the revised manuscript, we will add a new table in the Results section comparing PD36-C with MobileNetV2, EfficientNet-B0, and other relevant models from the plant disease detection literature. The table will report parameter counts, model sizes, accuracies on the New Plant Diseases Dataset, and FLOPs where available from the literature or our implementations. This addition will directly substantiate the efficiency-accuracy contribution. revision: yes

  2. Referee: Methodology: No description is given of the train/test split ratio, data-augmentation pipeline, hyperparameter search, or optimizer settings used to obtain the 0.9953 test accuracy. Reproducibility of the reported metrics therefore cannot be verified, weakening support for the central performance claim.

    Authors: We agree that full methodological details are required to ensure reproducibility. The original manuscript omitted these specifics. In the revised version, we will expand the Methodology section with a dedicated subsection describing the train/test split ratio, the data-augmentation pipeline, the hyperparameter search procedure, and the optimizer settings used to obtain the reported metrics. These additions will enable verification and replication of the results. revision: yes

  3. Referee: Discussion and deployment claims: Edge practicality is asserted solely from model size and the Qt GUI; no inference-latency, memory-footprint, or power measurements on target edge hardware (e.g., Raspberry Pi, mobile SoC) are reported, nor are any out-of-distribution tests on field images with varying lighting, angles, or concurrent diseases. The abstract itself flags these factors as performance degraders, so the generalization half of the claim rests on an untested assumption.

    Authors: We recognize that direct hardware measurements and out-of-distribution tests would provide stronger evidence for the deployment and generalization claims. While the small model size and Qt-based offline GUI support practicality on commodity hardware, we will revise the Discussion section to include a more explicit limitations paragraph. This will acknowledge the absence of specific latency, memory, power, and OOD field tests, while adding estimated FLOPs and memory usage to better contextualize edge suitability. We will also emphasize the curated nature of the dataset and the need for future robustness work, as already noted in the abstract. revision: partial

standing simulated objections not resolved
  • Empirical measurements of inference latency, memory footprint, and power consumption on specific edge hardware such as Raspberry Pi or mobile SoCs.
  • Out-of-distribution testing on real-world field images exhibiting variations in lighting, angles, weather conditions, or concurrent diseases.
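The first standing objection could be settled with a small timing harness of roughly this shape. The `predict_fn` below is a stand-in; on target hardware it would be the model's actual single-image inference call:

```python
import statistics
import time

def measure_latency(predict_fn, inputs, warmup=10, runs=100):
    """Median and p95 single-inference latency in milliseconds."""
    for _ in range(warmup):               # warm caches/JIT before timing
        predict_fn(inputs)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        predict_fn(inputs)
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return statistics.median(samples), samples[int(0.95 * len(samples)) - 1]

# Stand-in workload; replace with the model's predict call on a Raspberry Pi
# or mobile SoC to produce the numbers the referee asks for.
dummy = lambda x: sum(i * i for i in range(1000))
median_ms, p95_ms = measure_latency(dummy, None)
print(f"median={median_ms:.3f} ms  p95={p95_ms:.3f} ms")
```

Reporting median and p95 (rather than a single mean) guards against thermal throttling and scheduler noise, which matter on exactly the low-power devices the deployment claim targets.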

Circularity Check

0 steps flagged

No circularity: purely empirical CNN training results

full rationale

The paper reports direct empirical measurements from training and evaluating a CNN (PD36-C) on the external New Plant Diseases Dataset (87k images, 38 classes). Training accuracy 0.99697 and test accuracy 0.9953 are obtained via standard TensorFlow/Keras optimization on the given train/test split; no equations, derivations, predictions, or first-principles results are presented that could reduce to inputs by construction. No self-citations, ansatzes, uniqueness theorems, or fitted parameters renamed as predictions appear in the load-bearing claims. The edge-deployability statement follows from the reported parameter count (1.25M) and model size (4.77 MB) plus measured accuracy, which are independent of any circular reduction. Per the hard rules, this is a self-contained empirical report with no identifiable circular steps.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

The central claim rests on empirical training of a custom CNN whose layer counts, filter sizes, and training hyperparameters are chosen by the authors rather than derived. No new physical entities are postulated.

free parameters (2)
  • CNN architecture hyperparameters (layers, filters, kernel sizes)
    Specific design choices for PD36-C that determine the 1.25 million parameter count and are tuned to the dataset.
  • Training schedule (epochs, optimizer settings)
    Values selected to reach the reported 0.99697 training accuracy by epoch 30.
axioms (1)
  • domain assumption Convolutional neural networks are effective for image classification when trained on large labeled datasets.
    Standard assumption underlying all CNN-based plant disease detection work.

pith-pipeline@v0.9.0 · 5610 in / 1463 out tokens · 86486 ms · 2026-05-10T16:20:13.282135+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

68 extracted references · 58 canonical work pages · 2 internal anchors

  1. [1]

    Identification of disease using deep learning and evaluation of bacteriosis in peach leaf,

    S. Yadav, N. Sengar, A. Singh, A. Singh, and M. K. Dutta, “Identification of disease using deep learning and evaluation of bacteriosis in peach leaf,” Ecol. Inform., vol. 61, p. 101247, Mar. 2021, doi: 10.1016/j.ecoinf.2021.101247

  2. [2]

    Grains – a major source of sustainable protein for health,

    K. S. Poutanen et al., “Grains – a major source of sustainable protein for health,” Nutr. Rev., vol. 80, no. 6, pp. 1648–1663, Jun. 2022, doi: 10.1093/nutrit/nuab084

  3. [3]

    ALBANK - A CASE STUDY ON THE USE OF ETHEREUM BLOCKCHAIN TECHNOLOGY AND SMART CONTRACTS FOR SECURE DECENTRALIZED BANK APPLICATION,

    S. SHERIFI, S. ISMAILI, F. IDRIZI, and E. RUSTEMI, “ALBANK - A CASE STUDY ON THE USE OF ETHEREUM BLOCKCHAIN TECHNOLOGY AND SMART CONTRACTS FOR SECURE DECENTRALIZED BANK APPLICATION,” J. Nat. Sci. Math. UT, vol. 10, no. 19–20, pp. 380–400, Dec. 2025

  4. [4]

    Intelligent Traffic Monitoring with YOLOv11: A Case Study in Real-Time Vehicle Detection,

    S. Sherifi, F. Halili, and M. Kasa-Halili, “Intelligent Traffic Monitoring with YOLOv11: A Case Study in Real-Time Vehicle Detection,” in 2025 International Conference on Computer and Applications (ICCA), Dec. 2025, pp. 1–7. doi: 10.1109/ICCA66035.2025.11430921

  5. [5]

    Identification of an Elite Wheat-Rye T1RS·1BL Translocation Line Conferring High Resistance to Powdery Mildew and Stripe Rust,

    G. Han et al., “Identification of an Elite Wheat-Rye T1RS·1BL Translocation Line Conferring High Resistance to Powdery Mildew and Stripe Rust,” Plant Dis., vol. 104, no. 11, pp. 2940–2948, Nov. 2020, doi: 10.1094/PDIS-02-20-0323-RE

  6. [6]

    Deep transfer learning model for disease identification in wheat crop,

    S. Nigam et al., “Deep transfer learning model for disease identification in wheat crop,” Ecol. Inform., vol. 75, p. 102068, Jul. 2023, doi: 10.1016/j.ecoinf.2023.102068

  7. [7]

    AI based rice leaf disease identification enhanced by Dynamic Mode Decomposition,

    S. K.m., S. V., S. K. P., and S. O.k., “AI based rice leaf disease identification enhanced by Dynamic Mode Decomposition,” Eng. Appl. Artif. Intell., vol. 120, p. 105836, Apr. 2023, doi: 10.1016/j.engappai.2023.105836

  8. [8]

    Recognition of diseases of maize crop using deep learning models,

    Md. A. Haque, S. Marwaha, C. K. Deb, S. Nigam, and A. Arora, “Recognition of diseases of maize crop using deep learning models,” Neural Comput. Appl., vol. 35, no. 10, pp. 7407–7421, Apr. 2023, doi: 10.1007/s00521-022-08003-9

  9. [9]

    Corn Leaf Diseases Diagnosis Based on K-Means Clustering and Deep Learning,

    H. Yu et al., “Corn Leaf Diseases Diagnosis Based on K-Means Clustering and Deep Learning,” IEEE Access, vol. 9, pp. 143824–143835, 2021, doi: 10.1109/ACCESS.2021.3120379

  10. [10]

    A comprehensive review on detection of plant disease using machine learning and deep learning approaches,

    C. Jackulin and S. Murugavalli, “A comprehensive review on detection of plant disease using machine learning and deep learning approaches,” Meas. Sens., vol. 24, p. 100441, Dec. 2022, doi: 10.1016/j.measen.2022.100441

  11. [11]

    Exploration of machine learning approaches for automated crop disease detection,

    A. Singla et al., “Exploration of machine learning approaches for automated crop disease detection,” Curr. Plant Biol., vol. 40, p. 100382, Dec. 2024, doi: 10.1016/j.cpb.2024.100382

  12. [12]

    Plants Disease Identification and Classification Through Leaf Images: A Survey,

    S. Kaur, S. Pandey, and S. Goel, “Plants Disease Identification and Classification Through Leaf Images: A Survey,” Arch. Comput. Methods Eng., vol. 26, no. 2, pp. 507–530, Apr. 2019, doi: 10.1007/s11831-018-9255-6

  13. [13]

    Challenges and Issues in Plant Disease Detection Using Deep Learning:,

    P. Sahu, A. Chug, A. P. Singh, D. Singh, and R. P. Singh, “Challenges and Issues in Plant Disease Detection Using Deep Learning:,” M. Dua and A. K. Jain, Eds., IGI Global, 2021, pp. 56–

  14. [14]

    doi: 10.4018/978-1-7998-3299-7.ch004

  15. [15]

    Deep learning and computer vision in plant disease detection: a comprehensive review of techniques, models, and trends in precision agriculture,

    A. Upadhyay et al., “Deep learning and computer vision in plant disease detection: a comprehensive review of techniques, models, and trends in precision agriculture,” Artif. Intell. Rev., vol. 58, no. 3, p. 92, Jan. 2025, doi: 10.1007/s10462-024-11100-x

  16. [16]

    A dual-branch neural network for crop disease recognition by integrating frequency domain and spatial domain information,

    H. Li, L. Huang, C. Ruan, W. Huang, C. Wang, and J. Zhao, “A dual-branch neural network for crop disease recognition by integrating frequency domain and spatial domain information,” Comput. Electron. Agric., vol. 219, p. 108843, Apr. 2024, doi: 10.1016/j.compag.2024.108843

  17. [17]

    Technological support for detection and prediction of plant diseases: A systematic mapping study,

    V. Bischoff, K. Farias, J. P. Menzen, and G. Pessin, “Technological support for detection and prediction of plant diseases: A systematic mapping study,” Comput. Electron. Agric., vol. 181, p. 105922, Feb. 2021, doi: 10.1016/j.compag.2020.105922

  18. [18]

    A novel deep learning method for maize disease identification based on small sample-size and complex background datasets,

    E. Li, L. Wang, Q. Xie, R. Gao, Z. Su, and Y. Li, “A novel deep learning method for maize disease identification based on small sample-size and complex background datasets,” Ecol. Inform., vol. 75, p. 102011, Jul. 2023, doi: 10.1016/j.ecoinf.2023.102011

  19. [19]

    Grape Disease Detection Using Transformer-Based Integration of Vision and Environmental Sensing,

    W. Li et al., “Grape Disease Detection Using Transformer-Based Integration of Vision and Environmental Sensing,” Agronomy, vol. 15, no. 4, p. 831, Apr. 2025, doi: 10.3390/agronomy15040831

  20. [20]

    EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks,

    J. Poyatos, D. Molina, A. D. Martinez, J. Del Ser, and F. Herrera, “EvoPruneDeepTL: An evolutionary pruning model for transfer learning based deep neural networks,” Neural Netw., vol. 158, pp. 59–82, Jan. 2023, doi: 10.1016/j.neunet.2022.10.011

  21. [21]

    A Novel Approach to Detect Plant Disease Using DenseNet-121 Neural Network,

    “A Novel Approach to Detect Plant Disease Using DenseNet-121 Neural Network | springerprofessional.de.” Accessed: Jan. 22, 2026. [Online]. Available: https://link.springer.com/chapter/10.1007/978-981-16-9967-2_7

  22. [22]

    Identification of Apple Leaf Diseases Based on Deep Convolutional Neural Networks,

    B. Liu, Y. Zhang, D. He, and Y. Li, “Identification of Apple Leaf Diseases Based on Deep Convolutional Neural Networks,” Symmetry, vol. 10, no. 1, p. 11, Jan. 2018, doi: 10.3390/sym10010011

  23. [23]

    Optimizing Pretrained Convolutional Neural Networks for Tomato Leaf Disease Detection,

    I. Ahmad, M. Hamid, S. Yousaf, S. T. Shah, and M. O. Ahmad, “Optimizing Pretrained Convolutional Neural Networks for Tomato Leaf Disease Detection,” Complexity, vol. 2020, pp. 1–6, Sep. 2020, doi: 10.1155/2020/8812019

  24. [24]

    Crop leaf disease recognition based on Self-Attention convolutional neural network,

    W. Zeng and M. Li, “Crop leaf disease recognition based on Self-Attention convolutional neural network,” Comput. Electron. Agric., vol. 172, p. 105341, May 2020, doi: 10.1016/j.compag.2020.105341

  25. [25]

    Revolutionizing agriculture with artificial intelligence: plant disease detection methods, applications, and their limitations,

    A. Jafar, N. Bibi, R. A. Naqvi, A. Sadeghi-Niaraki, and D. Jeong, “Revolutionizing agriculture with artificial intelligence: plant disease detection methods, applications, and their limitations,” Front. Plant Sci., vol. 15, Mar. 2024, doi: 10.3389/fpls.2024.1356260

  26. [26]

    Plant Leaf Diseases Detection and Classification Using Image Processing and Deep Learning Techniques,

    M. A. Jasim and J. M. AL-Tuwaijari, “Plant Leaf Diseases Detection and Classification Using Image Processing and Deep Learning Techniques,” 2020 Int. Conf. Comput. Sci. Softw. Eng. CSASE, pp. 259–265, Apr. 2020, doi: 10.1109/CSASE48920.2020.9142097

  27. [27]

    Monitoring Tomato Leaf Disease through Convolutional Neural Networks,

    A. Guerrero-Ibañez and A. Reyes-Muñoz, “Monitoring Tomato Leaf Disease through Convolutional Neural Networks,” Electronics, vol. 12, no. 1, p. 229, Jan. 2023, doi: 10.3390/electronics12010229

  28. [28]

    A systematic analysis of machine learning and deep learning based approaches for identifying and diagnosing plant diseases,

    I. Ahmed and P. K. Yadav, “A systematic analysis of machine learning and deep learning based approaches for identifying and diagnosing plant diseases,” Sustain. Oper. Comput., vol. 4, pp. 96– 104, Jan. 2023, doi: 10.1016/j.susoc.2023.03.001

  29. [29]

    Deep Transfer Learning Technique for Multimodal Disease Classification in Plant Images,

    “Deep Transfer Learning Technique for Multimodal Disease Classification in Plant Images - Balaji - 2023 - Contrast Media & Molecular Imaging - Wiley Online Library.” Accessed: Jan. 22,

  30. [30]

    Available: https://onlinelibrary.wiley.com/doi/10.1155/2023/5644727

    [Online]. Available: https://onlinelibrary.wiley.com/doi/10.1155/2023/5644727

  31. [31]

    Plants Diseases Prediction Framework: A Image-Based System Using Deep Learning,

    M. Kirola, K. Joshi, S. Chaudhary, N. Singh, H. Anandaram, and A. Gupta, “Plants Diseases Prediction Framework: A Image-Based System Using Deep Learning,” in 2022 IEEE World Conference on Applied Intelligence and Computing (AIC), Jun. 2022, pp. 307–313. doi: 10.1109/AIC55036.2022.9848899

  32. [32]

    Transfer Learning for Rice Leaf Disease Detection,

    S. C. Gopi and H. Kishan Kondaveeti, “Transfer Learning for Rice Leaf Disease Detection,” in 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS), Feb. 2023, pp. 509–

  33. [33]

    doi: 10.1109/ICAIS56108.2023.10073711

  34. [34]

    Leaf disease identification and classification using optimized deep learning,

    Y. M. Abd Algani, O. J. Marquez Caro, L. M. Robladillo Bravo, C. Kaur, M. S. Al Ansari, and B. Kiran Bala, “Leaf disease identification and classification using optimized deep learning,” Meas. Sens., vol. 25, p. 100643, Feb. 2023, doi: 10.1016/j.measen.2022.100643

  35. [35]

    PPLC-Net: Neural network-based plant disease identification model supported by weather data augmentation and multi-level attention mechanism,

    G. Dai, J. Fan, Z. Tian, and C. Wang, “PPLC-Net: Neural network-based plant disease identification model supported by weather data augmentation and multi-level attention mechanism,” J. King Saud Univ. - Comput. Inf. Sci., vol. 35, no. 5, p. 101555, May 2023, doi: 10.1016/j.jksuci.2023.101555

  36. [36]

    Tomato Leaf Disease Detection and Classification Using Cnn | Mathematical Statistician and Engineering Applications

    “Tomato Leaf Disease Detection and Classification Using Cnn | Mathematical Statistician and Engineering Applications.” Accessed: Jan. 22, 2026. [Online]. Available: https://www.philstat.org/index.php/MSEA/article/view/853

  37. [37]

    Early Ginger Disease Detection Using Deep Learning Approach,

    M. G. Yigezu, M. M. Woldeyohannis, and A. L. Tonja, “Early Ginger Disease Detection Using Deep Learning Approach,” M. L. Berihun, Ed., in Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 411. Cham: Springer International Publishing, 2022, pp. 480–488. doi: 10.1007/978-3-030-93709-6_32

  38. [38]

    Performance analysis of AI-based solutions for crop disease identification, detection, and classification,

    D. Tirkey, K. K. Singh, and S. Tripathi, “Performance analysis of AI-based solutions for crop disease identification, detection, and classification,” Smart Agric. Technol., vol. 5, p. 100238, Oct. 2023, doi: 10.1016/j.atech.2023.100238

  39. [39]

    Artificial Intelligence-Based Robust Hybrid Algorithm Design and Implementation for Real-Time Detection of Plant Diseases in Agricultural Environments,

    İ. Yağ and A. Altan, “Artificial Intelligence-Based Robust Hybrid Algorithm Design and Implementation for Real-Time Detection of Plant Diseases in Agricultural Environments,” Biology, vol. 11, no. 12, p. 1732, Dec. 2022, doi: 10.3390/biology11121732

  40. [40]

    Performance of deep learning vs machine learning in plant leaf disease detection,

    R. Sujatha, J. M. Chatterjee, N. Jhanjhi, and S. N. Brohi, “Performance of deep learning vs machine learning in plant leaf disease detection,” Microprocess. Microsyst., vol. 80, p. 103615, Feb. 2021, doi: 10.1016/j.micpro.2020.103615

  41. [41]

    A comparative study of fine-tuning deep learning models for plant disease identification,

    E. C. Too, L. Yujian, S. Njuki, and L. Yingchun, “A comparative study of fine-tuning deep learning models for plant disease identification,” Comput. Electron. Agric., vol. 161, pp. 272–279, Jun. 2019, doi: 10.1016/j.compag.2018.03.032

  42. [42]

    Plant Disease Detection Using Deep Convolutional Neural Network,

    J. A. Pandian, V. D. Kumar, O. Geman, M. Hnatiuc, M. Arif, and K. Kanchanadevi, “Plant Disease Detection Using Deep Convolutional Neural Network,” Appl. Sci., vol. 12, no. 14, p. 6982, Jan. 2022, doi: 10.3390/app12146982

  43. [43]

    Using deep transfer learning for image-based plant disease identification,

    J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran, “Using deep transfer learning for image-based plant disease identification,” Comput. Electron. Agric., vol. 173, p. 105393, Jun. 2020, doi: 10.1016/j.compag.2020.105393

  44. [44]

    Performance analysis of deep learning CNN models for disease detection in plants using image segmentation,

    P. Sharma, Y. P. S. Berwal, and W. Ghai, “Performance analysis of deep learning CNN models for disease detection in plants using image segmentation,” Inf. Process. Agric., vol. 7, no. 4, pp. 566–574, Dec. 2020, doi: 10.1016/j.inpa.2019.11.001

  45. [45]

    Convolutional Neural Networks in Detection of Plant Leaf Diseases: A Review,

    B. Tugrul, E. Elfatimi, and R. Eryigit, “Convolutional Neural Networks in Detection of Plant Leaf Diseases: A Review,” Agriculture, vol. 12, no. 8, p. 1192, Aug. 2022, doi: 10.3390/agriculture12081192

  46. [46]

    Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach,

    S. M. Hassan, A. K. Maji, M. Jasiński, Z. Leonowicz, and E. Jasińska, “Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach,” Electronics, vol. 10, no. 12, p. 1388, Jan. 2021, doi: 10.3390/electronics10121388

  47. [47]

    Deep learning models for plant disease detection and diagnosis,

    K. P. Ferentinos, “Deep learning models for plant disease detection and diagnosis,” Comput. Electron. Agric., vol. 145, pp. 311–318, Feb. 2018, doi: 10.1016/j.compag.2018.01.009

  48. [48]

    Using transfer learning-based plant disease classification and detection for sustainable agriculture,

    W. Shafik, A. Tufail, C. De Silva Liyanage, and R. A. A. H. M. Apong, “Using transfer learning-based plant disease classification and detection for sustainable agriculture,” BMC Plant Biol., vol. 24, no. 1, p. 136, Feb. 2024, doi: 10.1186/s12870-024-04825-y

  49. [49]

    Traditional and current-prospective methods of agricultural plant diseases detection: A review,

    A. Khakimov, I. Salakhutdinov, A. Omolikov, and S. Utaganov, “Traditional and current-prospective methods of agricultural plant diseases detection: A review,” IOP Conf. Ser. Earth Environ. Sci., vol. 951, no. 1, p. 012002, Jan. 2022, doi: 10.1088/1755-1315/951/1/012002

  50. [50]

    New Plant Diseases Dataset

    “New Plant Diseases Dataset.” Accessed: Jan. 13, 2026. [Online]. Available: https://www.kaggle.com/datasets/vipoooool/new-plant- diseases-dataset

  51. [51]

    Plant leaf disease detection using computer vision and machine learning algorithms,

    S. S. Harakannanavar, J. M. Rudagi, V. I. Puranikmath, A. Siddiqua, and R. Pramodhini, “Plant leaf disease detection using computer vision and machine learning algorithms,” Glob. Transit. Proc., vol. 3, no. 1, pp. 305–310, Jun. 2022, doi: 10.1016/j.gltp.2022.03.016. [49] “shkelqimsherifi/AI_DeepLearning_CNN_Model_Plant_Disease_Detaction,” GitHub. Accesse...

  52. [52]

    Visualizing and Understanding Convolutional Networks,

    M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds., Cham: Springer International Publishing, 2014, pp. 818–833. doi: 10.1007/978-3-319-10590-1_53

  53. [53]

    A Decade of You Only Look Once (YOLO) for Object Detection: A Review,

    L. T. Ramos and A. D. Sappa, “A Decade of You Only Look Once (YOLO) for Object Detection: A Review,” Aug. 03, 2025, arXiv: arXiv:2504.18586. doi: 10.48550/arXiv.2504.18586

  54. [54]

    Models and pre-trained weights — Torchvision main documentation

    “Models and pre-trained weights — Torchvision main documentation.” Accessed: Feb. 07, 2026. [Online]. Available: https://docs.pytorch.org/vision/main/models.html

  55. [55]

    Using Deep Learning for Image-Based Plant Disease Detection,

    S. P. Mohanty, D. P. Hughes, and M. Salathé, “Using Deep Learning for Image-Based Plant Disease Detection,” Front. Plant Sci., vol. 7, p. 1419, Sep. 2016, doi: 10.3389/fpls.2016.01419

  56. [56]

    A High Performance Wheat Disease Detection Based on Position Information,

    S. Cheng et al., “A High Performance Wheat Disease Detection Based on Position Information,” Plants, vol. 12, no. 5, Mar. 2023, doi: 10.3390/plants12051191

  57. [57]

    Plant Leaf Disease Detection, Classification, and Diagnosis Using Computer Vision and Artificial Intelligence: A Review,

    A. Bhargava, A. Shukla, O. P. Goswami, M. H. Alsharif, P. Uthansakul, and M. Uthansakul, “Plant Leaf Disease Detection, Classification, and Diagnosis Using Computer Vision and Artificial Intelligence: A Review,” IEEE Access, vol. 12, pp. 37443–37469, 2024, doi: 10.1109/ACCESS.2024.3373001

  58. [58]

    Plant disease detection and classification techniques: a comparative study of the performances,

    W. B. Demilie, “Plant disease detection and classification techniques: a comparative study of the performances,” J. Big Data, vol. 11, no. 1, p. 5, Jan. 2024, doi: 10.1186/s40537-023-00863-9

  59. [59]

    Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection,

    M. Arsenovic, M. Karanovic, S. Sladojevic, A. Anderla, and D. Stefanovic, “Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection,” Symmetry, vol. 11, no. 7, p. 939, Jul. 2019, doi: 10.3390/sym11070939

  60. [60]

    Very Deep Convolutional Networks for Large-Scale Image Recognition

    K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Apr. 10, 2015, arXiv: arXiv:1409.1556. doi: 10.48550/arXiv.1409.1556

  61. [61]

    Rethinking the Inception Architecture for Computer Vision,

    C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 2818–2826. doi: 10.1109/CVPR.2016.308

  62. [62]

    Plant Disease Detection in Imbalanced Datasets Using Efficient Convolutional Neural Networks With Stepwise Transfer Learning,

    M. Ahmad, M. Abdullah, H. Moon, and D. Han, “Plant Disease Detection in Imbalanced Datasets Using Efficient Convolutional Neural Networks With Stepwise Transfer Learning,” IEEE Access, vol. 9, pp. 140565–140580, 2021, doi: 10.1109/ACCESS.2021.3119655

  63. [63]

    Plant Disease Classification and Adversarial Attack Using SimAM-EfficientNet and GP-MI-FGSM

    “Plant Disease Classification and Adversarial Attack Using SimAM-EfficientNet and GP-MI-FGSM.” Accessed: Jan. 22, 2026. [Online]. Available: https://www.mdpi.com/2071-1050/15/2/1233

  64. [64]

    MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

    A. G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” Apr. 17, 2017, arXiv: arXiv:1704.04861. doi: 10.48550/arXiv.1704.04861

  65. [65]

    ImageNet Classification with Deep Convolutional Neural Networks,

    A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2012. Accessed: Jan. 22, 2026. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html

  66. [66]

    Early Detection of Plant Viral Disease Using Hyperspectral Imaging and Deep Learning

    “Early Detection of Plant Viral Disease Using Hyperspectral Imaging and Deep Learning.” Accessed: Jan. 22, 2026. [Online]. Available: https://www.mdpi.com/1424-8220/21/3/742

  67. [67]

    Deep Residual Learning for Image Recognition,

    K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA: IEEE, Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90

  68. [68]

    Computer Vision, IoT and Data Fusion for Crop Disease Detection Using Machine Learning: A Survey and Ongoing Research,

    M. Ouhami, A. Hafiane, Y. Es-Saady, M. El Hajji, and R. Canals, “Computer Vision, IoT and Data Fusion for Crop Disease Detection Using Machine Learning: A Survey and Ongoing Research,” Remote Sens., vol. 13, no. 13, p. 2486, Jan. 2021, doi: 10.3390/rs13132486