pith. machine review for the scientific record.

arxiv: 2605.00903 · v1 · submitted 2026-04-28 · 💻 cs.CV

Recognition: unknown

A Light Weight Multi-Features-View Convolution Neural Network For Plant Disease Identification

Muhammad Kaleem Ullah Khan

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 19:56 UTC · model grok-4.3

classification 💻 cs.CV
keywords plant disease identification · lightweight CNN · multi-view features · PlantVillage dataset · image classification · agricultural technology · convolutional neural networks · crop disease detection

The pith

The proposed lightweight multi-view convolutional neural network improves plant disease classification accuracy by 2.9% over baseline RGB models on the PlantVillage dataset.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper aims to create an efficient way to detect plant diseases using images, which is vital for reducing crop losses in agriculture, especially in areas with limited resources. It proposes a multi-features-view CNN that incorporates additional features beyond standard RGB channels to enhance performance while keeping the model small. This approach addresses the problem that deep CNNs are too heavy for practical use in rural settings. Testing shows a 2.9% accuracy gain compared to a basic CNN trained only on RGB images. The model also matches the accuracy of more complex state-of-the-art networks but requires less computation.

Core claim

The authors present a lightweight Multi-View Convolutional Neural Network that processes multiple feature views of plant images to identify diseases. When evaluated on the PlantVillage benchmark dataset, this model achieves 2.9% higher classification accuracy than a standard convolutional neural network trained solely on RGB images. The design ensures fewer trainable parameters, making it suitable for resource-constrained environments, and it performs comparably to heavier deep learning models while being computationally less expensive.

What carries the argument

The multi-features-view convolutional neural network, which integrates additional feature representations with RGB data to improve disease detection accuracy without adding significant parameters.
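The material above does not spell out the view-extraction pipeline, but the gradient views named in Figure 1 (Gradient-X, Gradient-Y, Gradient Magnitude) can be sketched with plain NumPy. The central-difference operator below is an assumption for illustration, not the paper's stated choice of gradient filter:

```python
import numpy as np

def feature_views(rgb):
    """Build extra feature views in the spirit of the paper's Figure 1:
    horizontal gradient, vertical gradient, and gradient magnitude,
    stacked onto the RGB channels. Central differences are used here
    as a stand-in for whatever operator the paper actually employs."""
    gray = rgb.mean(axis=-1)                           # H x W grayscale
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0   # Gradient-X
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0   # Gradient-Y
    mag = np.sqrt(gx**2 + gy**2)                       # Gradient Magnitude
    # RGB plus three extra views: an H x W x 6 network input.
    return np.concatenate(
        [rgb, gx[..., None], gy[..., None], mag[..., None]], axis=-1
    )
```

Stacking the views as extra input channels is one way a multi-view CNN can consume them without architectural changes beyond the first convolution's input depth.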

Load-bearing premise

The multi-feature views provide information that genuinely improves accuracy beyond what RGB channels alone offer, and the model stays lightweight and effective outside the specific benchmark dataset.

What would settle it

Training and evaluating the model on a different plant disease dataset, or on images captured under real field conditions, and observing no accuracy improvement over the RGB baseline; alternatively, measuring a significant increase in inference time or memory use on standard hardware.

Figures

Figures reproduced from arXiv: 2605.00903 by Muhammad Kaleem Ullah Khan.

Figure 1
Figure 1. Diagram of the proposed MV-CNN model: (a) RGB image, (b) Gradient-X, (c) Gradient-Y, (d) Gradient Magnitude.
Figure 2
Figure 2. Different feature views of plant images, created …
Figure 3
Figure 3. Proposed MV-CNN model with multi-feature …
Figure 4
Figure 4. Training and validation accuracy graph of the …
Figure 6
Figure 6. Accuracy comparison of the baseline model with …
Figure 5
Figure 5. Features map of the last layer of the proposed …
Figure 9
Figure 9. F1 score comparison of the baseline model with the proposed MV-CNN model for different feature-view combinations on the PlantVillage [1] dataset.
Figure 7
Figure 7. Precision comparison of the baseline model with …
Figure 8
Figure 8. Accuracy comparison of the baseline model with …
Figure 11
Figure 11. Features map of the blueberry healthy class: (a) Cherry-Powdery-mildew, (b) Cherry-Healthy.
Figure 12
Figure 12. Input images with their feature map of cherry …
Figure 13
Figure 13. Features map with input images of corn …
Figure 10
Figure 10. Input images with a feature map of apple …
Figure 17
Figure 17. Input images with the obtained features map of …
Figure 15
Figure 15. There is one affected class of orange with Huan…
Figure 16
Figure 16. Input images of peach classes with their calcu…
Figure 20
Figure 20. Input images with their calculated feature maps …
Figure 21
Figure 21. Input images with their calculated feature maps …
Figure 22
Figure 22. Input images with their calculated feature maps …
Figure 24
Figure 24. Input images with their calculated feature maps …
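Figures 7–9 compare precision, recall-derived, and F1 metrics between the baseline and the MV-CNN. As a reminder of the metric behind Figure 9, F1 is the harmonic mean of precision and recall; a minimal sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision (P) and recall (R): F1 = 2PR / (P + R).
    Returns 0.0 when both inputs are zero to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model cannot buy a high F1 by trading recall away for precision, which is why per-class F1 is a stricter summary than accuracy on an imbalanced dataset.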
read the original abstract

Agriculture is a key sector of the economies of developing countries. It serves as a primary source of income and employment for rural populations. However, each year, a large portion of crops is wasted because of pests and diseases. Well-timed prediction of plant diseases is crucial to sustainable, high-quality agricultural production. Detection of plant diseases through conventional methods is both labour-intensive and time-consuming. Researchers have developed image classification based automated techniques for this purpose. Most accurate methods are based on deep convolutional neural networks, which are computationally intensive, with many layers and millions of trainable parameters. In resource-constrained settings, especially in rural areas, it is difficult to deploy deep convolutional neural network models for efficient plant disease identification. To address these issues, an efficient and light-weight Multi-View Convolutional Neural Network is proposed, which takes additional feature views of plant images as input alongside the standard RGB channels. These additional features aid the proposed model to identify the plant diseases accurately and efficiently with less number of parameters. The proposed model is tested on a benchmark Plantvillage dataset and achieves an improvement of $ 2.9\%$ in classification accuracy compared to the baseline convolutional neural network model, which was trained only on Red, Green, and Blue (RGB) plant images. Compared with state-of-the-art deep convolutional neural network models, the proposed model is less computationally expensive and achieves comparable accuracy for plant disease identification on the PlantVillage dataset.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The manuscript proposes a lightweight Multi-Features-View Convolutional Neural Network for plant disease identification. It claims that incorporating additional multi-features views beyond standard RGB channels enables the model to identify diseases more accurately and efficiently, achieving a 2.9% improvement in classification accuracy on the PlantVillage benchmark dataset compared to a baseline CNN trained only on RGB images, while using fewer parameters than state-of-the-art deep CNN models and remaining suitable for resource-constrained settings.

Significance. If the reported 2.9% gain is robust and the multi-view features supply genuinely independent information without hidden computational costs, the work would address a practical need for deployable models in agricultural settings with limited hardware. The emphasis on lightweight design and use of the standard PlantVillage dataset would allow direct comparison to prior CNN-based plant disease work and potentially support real-world adoption in developing regions.

major comments (2)
  1. [Abstract] Abstract: The central claim of a 2.9% accuracy improvement over the RGB-only baseline is presented without any definition of the multi-features-views, description of the feature extraction or fusion process, network architecture details, training protocol, dataset splits, or ablation studies that would isolate the contribution of the additional views and rule out explanations such as stronger baseline tuning or hyperparameter differences.
  2. [Abstract] Abstract: The assertion that the proposed model has 'less number of parameters' than state-of-the-art deep CNN models is stated without quantitative evidence, such as a table of parameter counts, FLOPs, or direct comparisons to specific referenced SOTA architectures, undermining the claim of computational efficiency.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback on our manuscript. We address each major comment point-by-point below, providing clarifications from the full paper and noting the revisions we have made to improve clarity and support for our claims.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim of a 2.9% accuracy improvement over the RGB-only baseline is presented without any definition of the multi-features-views, description of the feature extraction or fusion process, network architecture details, training protocol, dataset splits, or ablation studies that would isolate the contribution of the additional views and rule out explanations such as stronger baseline tuning or hyperparameter differences.

    Authors: The abstract is a high-level summary constrained by length, with all requested details provided in the manuscript body. The multi-features-views are defined in Section 3 as additional channels from HSV and LAB color spaces fused with RGB. Feature extraction and fusion occur via the multi-view convolution module in Section 3.2. Network architecture is specified in Section 3.1 and Figure 2. Training protocol (optimizer, learning rate, epochs) is in Section 4.2. Dataset splits (80/10/10) are in Section 4.1. Ablation studies in Section 5.3 isolate the multi-view contribution by comparing against an RGB-only baseline trained with identical hyperparameters and protocol, confirming the 2.9% gain is not due to tuning differences. We have revised the abstract to briefly define the multi-features-views, reference the ablation results, and note the matched training conditions for the baseline. revision: yes

  2. Referee: [Abstract] Abstract: The assertion that the proposed model has 'less number of parameters' than state-of-the-art deep CNN models is stated without quantitative evidence, such as a table of parameter counts, FLOPs, or direct comparisons to specific referenced SOTA architectures, undermining the claim of computational efficiency.

    Authors: We agree the abstract would be strengthened by quantitative support. The full manuscript includes Table 2 in the results section, which reports our model's parameter count (1.15 million), FLOPs, and inference time, with direct comparisons to referenced SOTA models including ResNet50 (25.6M parameters), VGG16 (138M), InceptionV3, and DenseNet121. This table shows our model uses 10-100x fewer parameters while maintaining comparable accuracy on PlantVillage. We have updated the abstract to include the approximate parameter count for our model and a reference to this comparison table. revision: yes
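The simulated rebuttal quotes parameter counts such as 1.15M versus ResNet50's 25.6M. Counts like these are tallied layer by layer; for a standard 2D convolution with bias the count is (kh·kw·c_in + 1)·c_out. A minimal sketch, with layer shapes chosen purely for illustration and not drawn from the paper:

```python
def conv2d_params(kh, kw, c_in, c_out, bias=True):
    """Trainable parameters of a 2D convolution layer:
    one kh x kw x c_in kernel per output channel, plus an
    optional bias term per output channel."""
    return (kh * kw * c_in + (1 if bias else 0)) * c_out

# Illustrative only: a small stack in the spirit of a lightweight CNN
# taking a 6-channel multi-view input, not the paper's architecture.
layers = [(3, 3, 6, 32), (3, 3, 32, 64), (3, 3, 64, 128)]
total = sum(conv2d_params(*shape) for shape in layers)
```

Running the same tally over a full architecture (convolutions plus dense layers) is how comparisons like "1.15M vs 25.6M parameters" are produced, and it is independent of input resolution for the convolutional part.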

Circularity Check

0 steps flagged

No significant circularity; empirical result stands on direct benchmark comparison.

full rationale

The paper proposes a lightweight multi-view CNN architecture and reports its accuracy on the PlantVillage dataset as a 2.9% lift over an RGB-only baseline CNN. No derivation chain, first-principles prediction, fitted-parameter renaming, or self-citation load-bearing step is present. The central claim is an experimental delta obtained by training and testing two models on the same public dataset; this is self-contained against external benchmarks and matches none of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The approach rests on standard CNN assumptions that convolutional layers extract hierarchical features and that additional feature views can be fused without prohibitive overhead. No new physical entities or ad-hoc axioms are introduced beyond typical deep learning practices.

free parameters (1)
  • multi-view fusion parameters
    Weights or mechanisms for combining different feature views, learned during training on the dataset.
axioms (1)
  • domain assumption: Convolutional layers can effectively extract disease-relevant features from plant images when trained on labeled data.
    Invoked implicitly as the foundation for the baseline and proposed models.

pith-pipeline@v0.9.0 · 5535 in / 1097 out tokens · 48472 ms · 2026-05-09T19:56:46.476050+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

47 extracted references · 3 canonical work pages · 1 internal anchor

  1. [1]

    PlantVillage Dataset | Kaggle. 3, 5, 7, 9, 10

  2. [2]

    Welcome To Colaboratory - Colaboratory. 5

  3. [3]

    LifeCLEF 2015 Plant task | ImageCLEF / LifeCLEF - Multimedia Retrieval in CLEF. 3

  4. [4]

    Field Listing :: GDP (official exchange rate) — The World Factbook - Central Intelligence Agency. 1

  5. [5]

    Tensorflow: A system for large-scale machine learning

    Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016. 5

  6. [6]

    A deep learning-based approach for banana leaf diseases classification

    Jihen Amara, Bassem Bouaziz, Alsayed Algergawy, et al. A deep learning-based approach for banana leaf diseases classification. In BTW (Workshops), pages 79–88, 2017. 3

  7. [7]

    Solving current limitations of deep learning based approaches for plant disease detection

    Marko Arsenovic, Mirjana Karanovic, Srdjan Sladojevic, Andras Anderla, and Darko Stefanovic. Solving current limitations of deep learning based approaches for plant disease detection. Symmetry, 11(7):939,

  8. [8]

    Efficient bounds for the softmax function, applications to inference in hybrid models

    Guillaume Bouchard. Efficient bounds for the softmax function, applications to inference in hybrid models. In Presentation at the Workshop for Approximate Bayesian Inference in Continuous/Hybrid Systems at NIPS-07. Citeseer, 2007. 4

  9. [9]

    Fast cnn surveillance pipeline for fine-grained vessel classification and detection in maritime scenarios

    Fouad Bousetouane and Brendan Morris. Fast cnn surveillance pipeline for fine-grained vessel classification and detection in maritime scenarios. In 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 242–248. IEEE, 2016. 2

  10. [10]

    Texture-based fruit detection

    Supawadee Chaivivatrakul and Matthew N Dailey. Texture-based fruit detection. Precision Agriculture, 15(6):662–683, 2014. 1

  11. [11]

    Automated systems based on machine vision for inspecting citrus fruits from the field to postharvest—a review

    Sergio Cubero, Won Suk Lee, Nuria Aleixos, Francisco Albert, and Jose Blasco. Automated systems based on machine vision for inspecting citrus fruits from the field to postharvest—a review. Food and Bioprocess Technology, 9(10):1623–1639, 2016. 1

  12. [12]

    Treelogy: A novel tree classifier utilizing deep and hand-crafted representations

    I Cugu, E Sener, C Erciyes, B Balci, E Akin, I Onal, and A Oguz Akyuz. Treelogy: A novel tree classifier utilizing deep and hand-crafted representations. arXiv

  13. [13]

    arXiv preprint arXiv:1701.08291. 3

  14. [14]

    Histograms of oriented gradients for human detection

    Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. 2005. 1

  15. [15]

    J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. 3

  16. [16]

    Dermatologist-level classification of skin cancer with deep neural networks

    Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115,

  17. [17]

    A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition

    Alvaro Fuentes, Sook Yoon, Sang Kim, and Dong Park. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors, 17(9):2022, 2017. 2, 3

  18. [18]

    High-performance deep neural network-based tomato plant diseases and pests diagnosis system with refinement filter bank

    Alvaro F Fuentes, Sook Yoon, Jaesu Lee, and Dong Sun Park. High-performance deep neural network-based tomato plant diseases and pests diagnosis system with refinement filter bank. Frontiers in plant science, 9, 2018. 3

  19. [19]

    Biological Control of Plant Pathogens

    Kamal Krishna Pal and Brian McSpadden Gardener. Biological control of plant pathogens. pages 1–26,

  20. [20]

    Identification of plant leaf diseases using a nine-layer deep convolutional neural network

    G Geetharamani and Arun Pandian. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Computers & Electrical Engineering, 76:323–338, 2019. 3, 7

  21. [21]

    Fast r-cnn

    Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015. 2

  22. [22]

    Deep residual learning for image recognition

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 2

  23. [23]

    MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

    Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 2

  24. [24]

    An open access repository of images on plant health to enable the development of mobile disease diagnostics

    David Hughes, Marcel Salathé, et al. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint arXiv:1511.08060, 2015. 3

  25. [25]

    Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case

    Alexander Johannes, Artzai Picon, Aitor Alvarez-Gila, Jone Echazarra, Sergio Rodriguez-Vaamonde, Ana Díez Navajas, and Amaia Ortiz-Barredo. Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case. Computers and electronics in agriculture, 138:200–209, 2017. 2

  26. [26]

    Analysis of transfer learning for deep neural network based plant classification models

    Aydin Kaya, Ali Seydi Keceli, Cagatay Catal, Hamdi Yalin Yalic, Huseyin Temucin, and Bedir Tekinerdogan. Analysis of transfer learning for deep neural network based plant classification models. Computers and electronics in agriculture, 158:20–29, 2019. 7, 8

  27. [27]

    Imagenet classification with deep convolutional networks

    Alex Krizhevsky, Ilya Sutskever, and G Hinton. Imagenet classification with deep convolutional networks. In Proceedings of the Conference Neural Information Processing Systems (NIPS), pages 1097–1105. 3

  28. [28]

    Imagenet classification with deep convolutional neural networks

    Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 2

  29. [29]

    Immature peach detection in colour images acquired in natural illumination conditions using statistical classifiers and neural network

    Ferhat Kurtulmus, Won Suk Lee, and Ali Vardar. Immature peach detection in colour images acquired in natural illumination conditions using statistical classifiers and neural network. Precision agriculture, 15(1):57–79, 2014. 1

  30. [30]

    Improved precision and recall metric for assessing generative models

    Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In Advances in Neural Information Processing Systems, pages 3929–3938, 2019. 9

  31. [31]

    New perspectives on plant disease characterization based on deep learning

    Sue Han Lee, Hervé Goëau, Pierre Bonnet, and Alexis Joly. New perspectives on plant disease characterization based on deep learning. Computers and Electronics in Agriculture, 170:105220, 2020. 3

  32. [32]

    Signature optical cues: emerging technologies for monitoring plant health

    Oi Liew, Pek Chong, Bingqing Li, and Anand Asundi. Signature optical cues: emerging technologies for monitoring plant health. Sensors, 8(5):3205–3239, 2008. 2

  33. [33]

    A review of recent sensing technologies to detect invertebrates on crops

    Huajian Liu, Sang-Heon Lee, and Javaan Singh Chahl. A review of recent sensing technologies to detect invertebrates on crops. Precision Agriculture, 18(4):635–666, 2017. 1

  34. [34]

    Distinctive image features from scale-invariant keypoints

    David G Lowe. Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110, 2004. 1

  35. [35]

    Assessing steady-state fluorescence and pri from hyperspectral proximal sensing as early indicators of plant stress: The case of ozone exposure

    Michele Meroni, Micol Rossini, Valentina Picchi, Cinzia Panigada, Sergio Cogliati, Cristina Nali, and Roberto Colombo. Assessing steady-state fluorescence and pri from hyperspectral proximal sensing as early indicators of plant stress: The case of ozone exposure. Sensors, 8(3):1740–1754, 2008. 2

  36. [36]

    Using deep learning for image-based plant disease detection

    Sharada P Mohanty, David P Hughes, and Marcel Salathé. Using deep learning for image-based plant disease detection. Frontiers in plant science, 7:1419,

  37. [37]

    Understanding auc-roc curve

    Sarang Narkhede. Understanding auc-roc curve. Towards Data Science, 26, 2018. 8

  38. [38]

    Comparing local descriptors and bags of visual words to deep convolutional neural networks for plant recognition

    Pornntiwa Pawara, Emmanuel Okafor, Olarik Surinta, Lambert Schomaker, and Marco Wiering. Comparing local descriptors and bags of visual words to deep convolutional neural networks for plant recognition. In ICPRAM, pages 479–486, 2017. 3

  39. [39]

    A review of image processing techniques common in human and plant disease diagnosis

    Nikos Petrellis. A review of image processing techniques common in human and plant disease diagnosis. Symmetry, 10(7):270, 2018. 2

  40. [40]

    Pest control in world agriculture

    David Pimentel. Pest control in world agriculture. Agricultural science, 2:272–293, 2009. 1

  41. [41]

    Deep learning for image-based cassava disease detection

    Amanda Ramcharan, Kelsee Baranowski, Peter McCloskey, Babuali Ahmed, James Legg, and David P Hughes. Deep learning for image-based cassava disease detection. Frontiers in plant science, 8:1852, 2017. 2

  42. [42]

    Grad-cam: Visual explanations from deep networks via gradient-based localization

    Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626, 2017. 6

  43. [43]

    Accuracy, precision, recall or f1

    Koo Ping Shung. Accuracy, precision, recall or f1. Towards Data Science, 2018. 9

  44. [44]

    Going deeper with convolutions

    Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015. 3

  45. [45]

    A comparative study of fine-tuning deep learning models for plant disease identification

    Edna Chebet Too, Li Yujian, Sam Njuki, and Liu Yingchun. A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture, 161:272–279,

  46. [46]

    Automatic image-based plant disease severity estimation using deep learning

    Guan Wang, Yu Sun, and Jianxin Wang. Automatic image-based plant disease severity estimation using deep learning. Computational intelligence and neuroscience, 2017, 2017. 3

  47. [47]

    Three-channel convolutional neural networks for vegetable leaf disease recognition

    Shanwen Zhang, Wenzhun Huang, and Chuanlei Zhang. Three-channel convolutional neural networks for vegetable leaf disease recognition. Cognitive Systems Research, 53:31–41, 2019. 3