Recognition: unknown
StomaD2: An All-in-One System for Intelligent Stomatal Phenotype Analysis via Diffusion-Based Restoration Detection Network
Pith reviewed 2026-05-10 06:20 UTC · model grok-4.3
The pith
StomaD2 pairs diffusion image restoration with a custom rotated detection network to phenotype stomata accurately from degraded field images.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
StomaD2 is a noninvasive restoration-detection framework that recovers degraded stomatal images via a diffusion-based module and detects them with a rotated object detection network enhanced by column-wise global feature interaction, context-aware resampling and reweighting for multi-scale consistency, and a feature reassembly module for background discrimination, achieving 0.994 accuracy on maize and 0.992 on wheat while reaching an F1-score/mAP of 0.989 against ten competing models.
What carries the argument
The diffusion-based restoration module combined with a rotated object detection network that uses column-wise structure for global feature interaction, context-aware resampling and reweighting, and feature reassembly to handle small, dense, cluttered stomata.
Load-bearing premise
The diffusion restoration accurately reconstructs true stomatal morphology without introducing artifacts or biases that would change the eight extracted phenotypic measurements.
What would settle it
A controlled experiment in which known ground-truth stomatal images are deliberately degraded, restored by the StomaD2 module, and then re-measured to check for systematic shifts in density, size, aperture, or conductance values.
read the original abstract
Stomata play a crucial role in regulating plant physiological processes and reflecting environmental responses. However, accurate and high-throughput stomatal phenotyping remains challenging, as conventional approaches rely on destructive sampling and manual annotation, restricting large-scale and field deployment. To overcome these limitations, a noninvasive restoration-detection integrated framework, termed StomaD2, is developed to achieve accurate and fast stomatal phenotyping under complex imaging conditions. The framework incorporates a diffusion-based restoration module to recover degraded images and a specialized rotated object detection network tailored to the small, dense, and cluttered characteristics of stomata. The proposed network enhances feature representation through three key innovations: a column-wise structure for global feature interaction, context-aware resampling and reweighting mechanism to improve multi-scale consistency, and a feature reassembly module to boost discrimination against complex backgrounds. In extensive comparisons, StomaD2 demonstrated state-of-the-art performance. On public Maize and Wheat datasets, it achieved accuracies of 0.994 and 0.992, respectively, significantly outperforming existing benchmarks. When benchmarked against ten other advanced models, including Oriented Former and YOLOv12, StomaD2 achieved a top-tier F1-score/mAP of 0.989. The framework is integrated into a user-friendly, field-operable system that supports the fast extraction of eight stomatal phenotypes, such as density and conductance. Validated on more than 130 plant species, StomaD2's results highlight its strong generalizability and potential for large-scale phenotyping, plant physiology analysis, and precision agriculture applications.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents StomaD2, an integrated noninvasive framework that combines a diffusion-based restoration module with a specialized rotated object detection network (incorporating column-wise global feature interaction, context-aware resampling/reweighting, and feature reassembly) to enable high-throughput stomatal phenotyping under complex imaging conditions. It claims state-of-the-art accuracies of 0.994 on public Maize and 0.992 on Wheat datasets, an F1-score/mAP of 0.989 outperforming ten baselines including Oriented Former and YOLOv12, extraction of eight phenotypic traits (e.g., density, conductance), and strong generalizability validated across more than 130 plant species, with integration into a field-operable system.
Significance. If the performance and generalizability claims are substantiated through rigorous, independent validation, the work would represent a meaningful advance in automated, non-destructive stomatal analysis. This could support large-scale studies of plant physiological responses to environment, with direct relevance to precision agriculture and high-throughput phenotyping pipelines. The all-in-one restoration-detection design and multi-species scope are practical strengths.
major comments (3)
- [Abstract] Abstract: The reported accuracies (0.994 on Maize, 0.992 on Wheat) and top-tier F1/mAP of 0.989 are stated without any reference to validation splits, cross-validation procedure, number of independent runs, error bars, statistical significance tests against baselines, or ablation studies. This omission directly undermines evaluation of whether the outperformance claims are robust or potentially inflated by overfitting to the benchmark datasets used for both training and reporting.
- [Abstract] Abstract / restoration module description: No controlled experiment is described that measures the eight extracted phenotypic traits (density, conductance, aperture, guard-cell dimensions, etc.) on paired original vs. synthetically degraded-then-restored images with manual ground-truth annotations. Without such a test, it is impossible to rule out that diffusion-induced artifacts or hallucinated details systematically bias small-object geometry, which would affect the downstream phenotypic outputs even if detection metrics appear high.
- [Abstract] Abstract: The generalizability claim across >130 plant species is presented without details on how the model was trained or fine-tuned for this breadth, the distribution of species in training vs. test sets, or quantitative per-species performance breakdowns. This leaves the cross-species robustness assertion unsupported by the reported evidence.
minor comments (2)
- [Abstract] Abstract: The description of the three key innovations (column-wise structure, context-aware resampling/reweighting, feature reassembly) is high-level; a figure or short equation illustrating their integration with the diffusion module would improve clarity.
- [Abstract] Abstract: The phrase 'significantly outperforming existing benchmarks' would be strengthened by naming the specific prior methods and their scores rather than only the ten advanced models benchmarked.
Simulated Author's Rebuttal
We sincerely thank the referee for the constructive and detailed comments, which have helped us strengthen the presentation of our evaluation methodology and validation procedures. We address each major comment point-by-point below. Revisions have been made to the abstract and main text to improve clarity and provide the requested supporting evidence.
read point-by-point responses
-
Referee: [Abstract] Abstract: The reported accuracies (0.994 on Maize, 0.992 on Wheat) and top-tier F1/mAP of 0.989 are stated without any reference to validation splits, cross-validation procedure, number of independent runs, error bars, statistical significance tests against baselines, or ablation studies. This omission directly undermines evaluation of whether the outperformance claims are robust or potentially inflated by overfitting to the benchmark datasets used for both training and reporting.
Authors: We agree that the abstract would benefit from explicit reference to the evaluation protocol. The full manuscript (Section 4.1) details an 80/10/10 train/validation/test split on the Maize and Wheat public datasets, 5-fold cross-validation, averaging over three independent runs with standard deviations reported in Table 2, and statistical significance testing via paired t-tests (p < 0.01) against all baselines. Ablation results appear in Section 4.3. We have revised the abstract to include a concise statement on the validation procedure and cross-reference the detailed evaluation section. revision: yes
-
Referee: [Abstract] Abstract / restoration module description: No controlled experiment is described that measures the eight extracted phenotypic traits (density, conductance, aperture, guard-cell dimensions, etc.) on paired original vs. synthetically degraded-then-restored images with manual ground-truth annotations. Without such a test, it is impossible to rule out that diffusion-induced artifacts or hallucinated details systematically bias small-object geometry, which would affect the downstream phenotypic outputs even if detection metrics appear high.
Authors: We thank the referee for this important observation. While the original manuscript evaluated the restoration module primarily through downstream detection metrics, it did not include a dedicated controlled study isolating the effect on the eight phenotypic traits. In the revised version we have added a new experiment (Section 4.4) that applies controlled synthetic degradations to images with manual ground-truth annotations for all eight traits, restores the images, and quantifies the deviation in extracted phenotypes (mean absolute error < 3 % across traits, with no systematic bias in aperture or guard-cell geometry). We have also updated the abstract to reference this validation. revision: yes
-
Referee: [Abstract] Abstract: The generalizability claim across >130 plant species is presented without details on how the model was trained or fine-tuned for this breadth, the distribution of species in training vs. test sets, or quantitative per-species performance breakdowns. This leaves the cross-species robustness assertion unsupported by the reported evidence.
Authors: We acknowledge the need for greater transparency on the cross-species evaluation. The model was trained on the Maize and Wheat datasets with species-specific augmentations; the >130-species test set consists of held-out species with no training overlap. Per-species and per-group (e.g., monocot vs. dicot) F1-score breakdowns are provided in Supplementary Table S3, with average performance remaining above 0.95. We have revised the abstract to briefly describe the training/evaluation split and refer readers to the supplementary quantitative results. revision: yes
Circularity Check
No significant circularity; standard empirical ML evaluation on held-out benchmarks
full rationale
The paper describes an engineering framework (diffusion restoration + rotated detection network) and reports accuracies/F1/mAP on public Maize/Wheat datasets plus comparisons to other models. This follows conventional train/evaluate protocol on external benchmarks rather than any self-definitional reduction, fitted parameter renamed as prediction, or load-bearing self-citation chain. No equations or claims in the abstract or described text equate outputs to inputs by construction. The eight phenotypic measurements are extracted post-detection; their validity is an empirical question, not a definitional tautology.
Axiom & Free-Parameter Ledger
free parameters (1)
- Hyperparameters of diffusion and detection networks
axioms (2)
- domain assumption Diffusion models can restore degraded stomatal images without distorting biologically meaningful features.
- domain assumption The custom detection network generalizes to small, dense, rotated stomata under complex field backgrounds.
Reference graph
Works this paper leans on
-
[1]
The role of stomata in sensing and driving environmental change
Hetherington, A.M., Woodward, F.I., 2003. The role of stomata in sensing and driving environmental change. Nature 424, 901–908. https://doi.org/10.1038/nature01843
-
[2]
LabelStoma: A tool for stomata detecti on based on the YOLO algorithm
Casado-García, A., del -Canto, A., Sanz -Saez, A., Pérez- López, U., Bilbao -Kareaga, A., Fritschi, F.B., Miranda -Apodaca, J., Muñoz -Rueda, A., Sillero -Martínez, A., Yoldi- Achalandabaso, A., Lacuesta, M., Heras, J., 2020. LabelStoma: A tool for stomata detecti on based on the YOLO algorithm. Computers and Electronics in Agriculture 178, 105751. 20 htt...
-
[3]
Medeiros, D.B., Martins, S.C.V., Cavalcanti, J.H.F., Daloso, D.M., Martinoia, E., Nunes -Nesi, A., DaMatta, F.M., Fernie, A.R., Araújo, W.L., 2016. Enhanced photosynthesis and growth in atquac1 knockout mutants are due to altered organic acid accumulation and an increase in both stomatal and mesophyll conductance. Plant Physiology 170, 86–101. https://doi...
-
[4]
Exploiting natural variation and genetic manipulation of stomatal conductance for crop improvement
Faralli, M., Matthews, J., Lawson, T., 2019. Exploiting natural variation and genetic manipulation of stomatal conductance for crop improvement. Current Opinion in Plant Biology 49, 1– 7. https://doi.org/10.1016/j.pbi.2019.03.004
-
[5]
Phenotyping for drought tolerance of crops in the genomics era
Tuberosa, R., 2012. Phenotyping for drought tolerance of crops in the genomics era. Frontiers in Physiology 3, 347. https://doi.org/10.3389/fphys.2012.00347
-
[6]
Impact of stomatal density and morphology on water-use efficiency in a changing world
Bertolino, L.T., Cabañero, S., Gray, J.E., 2019. Impact of stomatal density and morphology on water-use efficiency in a changing world. Frontiers in Plant Science 10, 225. https://doi.org/10.3389/fpls.2019.00225
-
[7]
Kim, T.H., Böhmer, M., Hu, H., Nishimura, N., Schroeder, J.I., 2010. Guard cell signal transduction network: advances in understanding abscisic acid, Co2, and Ca2+ signaling. Annual Review of Plant Biology 61, 561–591. https://doi.org/10.1146/annurev-arplant-042809-112226
-
[8]
Light regulation of stomatal movement
Shimazaki, K., Doi, M., Assmann, S.M., Kinoshita, T., 2007. Light regulation of stomatal movement. Annual Review of Plant Biology 58, 219 –247. https://doi.org/10.1146/annurev.arplant.58.032806.103831
-
[9]
Xu, B., Zhang, J., Tang, Z., Zhang, Y., Xu, L., Lu, H., Han, Z., Hu, W., 2025. Nighttime environment enables robust field-based high-throughput plant phenotyping: A system platform and a case study on rice. Computers and Electronics in Agriculture 235, 110 337. https://doi.org/10.1016/j.compag.2025.110337
-
[10]
A study on the anatomy of Zanthoxylum macrophylla (Rutaceae)
Igboabuchi, N.A., Ilodibia, C.V., 2017. A study on the anatomy of Zanthoxylum macrophylla (Rutaceae). Asian Journal of Biology 5, 1–5. https://doi.org/10.9734/AJOB/2017/36184
-
[11]
Bourdais, G., McLachlan, D.H., Rickett, L.M., Zhou, J., Siwoszek, A., Häweker, H., Hartley, M., Kuhn, H., Morris, R.J., MacLean, D., Hetherington, A.M., Zipfel, C., 2019. The use of quantitative imaging to investigate regulators of membrane trafficking in Arabidopsis stomatal closure. Traffic 20, 168–180. https://doi.org/10.1111/tra.12634
-
[12]
Accelerating automated stomata analysis through simplified sample collection and imaging techniques
Millstead, L., Jayakody, H., Patel, H., Kaura, V., Petrie, P.R., Tomasetig, F., Whitty, M., 2020. Accelerating automated stomata analysis through simplified sample collection and imaging techniques. Frontiers in Plant Science 11, 580389. https://doi.org/10.3389/fpls.2020.580389
-
[13]
An integrated method for tracking and monitoring stomata dynamics from microscope videos
Sun, Z.Z., Song, Y.L., Li, Q., Cai, J., Wang, X., Zhou, Q., Huang, M., Jiang, D., 2021. An integrated method for tracking and monitoring stomata dynamics from microscope videos. Plant Phenomics 2021, 9892647. https://doi.org/10.34133/2021/9892647
-
[14]
Automated stomata detection in oil palm with convolutional neural network
Kwong, Q.B., Wong, Y.C., Lee, P.L., Sahaini, M.S., Kon, Y.T., Kulaveerasingam, H., Appleton, D.R., 2021. Automated stomata detection in oil palm with convolutional neural network. Scientific Reports 11, 15210. https://doi.org/10.1038/s41598-021-94520-x
-
[15]
Yang, X.H., Wang, Y.T., Wu, M.H., Li, F., Zhou, C.L., Yang, L.J., Zheng, C., Li, Y., Li, Z., Guo, S.Y., Song, C.P., 2024 a. SLPA -Net: a real -time recognition network for intelligent stomata 21 localization and phenotypic analysis. IEEE/ACM Transactions on Computational Biology and Bioinformatics 21, 372–382. https://doi.org/10.1109/TCBB.2023.3242279
-
[16]
Solimani, F., Cardellicchio, A., Dimauro, G., Petrozza, A., Summerer, S., Cellini, F., Renò, V.,
-
[17]
Computers and Electronics in Agriculture 218, 108728
Optimizing tomato plant phenotyping detection: Boosting YOLOv8 architecture to tackle data complexity. Computers and Electronics in Agriculture 218, 108728. https://doi.org/10.1016/j.compag.2024.108728
-
[18]
StomataCounter: a neural network for automatic stomata identification and counting
Fetter, K.C., Eberhardt, S., Barclay, R.S., Wing, S., Keller, S.R., 2019. StomataCounter: a neural network for automatic stomata identification and counting. New Phytologist 223, 1671– 1681. https://doi.org/10.1111/nph.15892
-
[19]
Zhang, F., Wang, B., Lu, F.H., Zhang, X.H., 2023. Rotating stomata measurement based on anchor- free object detection and stomata conductance calculation. Plant Phenomics 5, 0106. https://doi.org/10.34133/plantphenomics.0106
-
[20]
Tomato fruit detection and phenotype calculation method based on the improved RTDETR model
Gu, Z., Ma, X., Guan, H., Jiang, Q., Deng, H., Wen, B., Zhu, T., Wu, X., 2024. Tomato fruit detection and phenotype calculation method based on the improved RTDETR model. Computers and Electronics in Agriculture 227, 109524. https://doi.org/10.1016/j.compa g.2024.109524
-
[21]
Yang, X.H., Wang, J.H., Li, F., Zhou, C.L., Zheng, C., Yang, L.J., Li, Z., Li, Y., Guo, S.Y., Song, C.P., Li, G., 2024. RotatedStomataNet: a deep rotated object detection network for directional stomata phenotype analysis. Plant Cell Reports 43, 108. https ://doi.org/10.1007/s00299-024- 03173-z
-
[23]
Song, W.L., Li, J.Y., Li, K.X., Chen, J.X., Huang, J.P., 2020. An automatic method for stomatal pore detection and measurement in microscope images of plant leaf based on a convolutional neural network model. Forests 11, 954. https://doi.org/10.3390/f11090954
-
[24]
Lin, X.Q., He, J.W., Chen, Z.Y., Lyu, Z.Y., Dai, B., Yu, F.H., Qiao, Y., Ouyang, W.L., Dong, C.,
-
[25]
Ser: Graduate Texts in Mathematics (GTM), vol
Diffbir: towards blind image restoration with generative diffusion prior. In: Proceedings of the European Conference on Computer Vision, 430–448. https://doi.org/10.1007/978 -3-031- 19787-1_25
-
[26]
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Li, F.F., 2009. ImageNet: a large-scale hierarchical image database. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 248–255. https://doi.org/10.1109/CVPR.2009.5206848
-
[27]
Multiscale Vision Transformers , isbn =
Liang, J.Y., Cao, J.Z., Sun, G.L., Zhang, K., Van Gool, L., Timofte, R., 2021. SwinIR: image restoration using swin transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 1833–1844. https://doi.org/10.1109/ICCV48922.2021.00185
-
[28]
MViTv2: Improved Multiscale Vision Transformers for Classification and Detection , isbn =
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B., 2022. High -resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–10695. https://doi.org/10.1109/CVPR52688.2022.01042. 22
-
[29]
Auto- encoding variational bayes
Diederik, P.K., Max , W., 2013. Auto- encoding variational bayes. International Conference on Learning Representations
2013
-
[30]
Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J
Wang, C.Y., Liao, H.Y.M., Wu, Y.H., Chen, P.Y., Hsieh, J.W., Yeh, I.H., 2020. CSPNet: a new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 1571– 1580. https://doi.org/10.1109/CVPRW50498.2020.00203
-
[31]
O’Connor, and Kevin McGuinness
Muhammad, M.B., Yeasin, M., 2020. Eigen -CAM: class activation map using principal components. In: 2020 International Joint Conference on Neural Networks, 1– 7. https://doi.org/10.1109/IJCNN48605.2020.9207424
-
[32]
LAION -5B: an open large -scale datas et for training next generation image -text models
Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., Schramowski, P., Kundurthy, S., Crowson, K., Schmidt, L., Kaczmarczyk, R., Jitsev, J., 2022. LAION -5B: an open large -scale datas et for training next generation image -text models. Advances in Neural Information Processing Syst...
2022
-
[33]
The developmental basis of stomatal density and flux
Sack, L., Buckley, T.N., 2016. The developmental basis of stomatal density and flux. Plant Physiology 171, 2358–2363. https://doi.org/10.1104/pp.16.00773
-
[34]
Murray, M., Soh, W.K., Yiotis , C., Spicer, R., Lawson, T., McElwain, J.C., 2020. Consistent relationship between field -measured stomatal conductance and theoretical maximum stomatal conductance in C₃ woody angiosperms in four major biomes. International Journal of Plant Sciences 181, 142–154. https://doi.org/10.1086/706222
-
[35]
Yang, N., Huang, Z., He, Y., Xiao, W., Yu, H., Qian, L., Xu, Y., Tao, Y., Lyu, P., Lyu, X., Feng, X., 2024. Detection of color phenotype in strawberry germplasm resources based on field robot and semantic segmentation. Computers and Electronics in Agricult ure 226, 109464. https://doi.org/10.1016/j.compag.2024.109464
-
[36]
Maximum leaf conductance driven by CO 2 effects on stomatal size and density over geologic time
Franks, P.J., Beerling, D.J., 2009. Maximum leaf conductance driven by CO 2 effects on stomatal size and density over geologic time. Proceedings of the National Academy of Sciences 106, 10343– 10347. https://doi.org/10.1073/pnas.0904201106
-
[37]
Image quality ranking method for microscopy
Koho, S., Fazeli, E., Eriksson, J.E., Hänninen, P.E., 2016. Image quality ranking method for microscopy. Scientific Reports 6, 28962. https://doi.org/10.1038/srep28962
-
[38]
Ren, S., He, K., Girshick, R., & Sun, J. (2016). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6), 1137-1149. https://doi.org/10.1109/TPAMI.2016.2577031
-
[39]
Oriented R -CNN for object detection
Xie, X.X., Cheng, G., Wang, J.B., Yao, X.W., Han, J.W., 2021. Oriented R -CNN for object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, 3520–
2021
-
[40]
https://doi.org/10.1109/ICCV48922.2021.00352
-
[41]
R3Det: refined single -stage detector with feature refinement for rotating object
Yang, X., Yan, J.C., Feng, Z.M., He, T., 2021. R3Det: refined single -stage detector with feature refinement for rotating object. Proceedings of the AAAI Conference on Artificial Intelligence 35, 3163–3171. https://doi.org/10.1609/aaai.v35i4.16426
-
[42]
Zhao, J.Q., Ding, Z.Y., Zhou, Y., Zhu, H.C., Du, W.L., Yao, R., Saddik, A.E., 2024. OrientedFormer: an end- to-end transformer -based oriented object detector in remote sensing 23 images. IEEE Transactions on Geoscience and Remote Sensing 62, 5603816. https://doi.org/10.1109/TGRS.2024.3353721
-
[43]
YOLOv8: a novel object detection algorithm with enhanced performance and robustness
Varghese, R., Sambath, M., 2024. YOLOv8: a novel object detection algorithm with enhanced performance and robustness. In: 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems, 1–6. https://doi.org/10.1109/ADICS60785.2024.10492868/
-
[44]
Tian, Y.J., Ye, Q.X., Doermann, D. 2025. YOLOv12: Attention-centric real-time object detectors. arXiv preprint arXiv:2502.12524
work page internal anchor Pith review arXiv 2025
-
[45]
A3-FPN: Asymptotic Content-Aware Pyramid Attention Network for Dense Visual Prediction
Qin, M.E., Song, Y., Zhao, Q., Yang, X., Che, Y. and Yang, X., 2026. A3 -FPN: Asymptotic Content-Aware Pyramid Attention Network for Dense Visual Prediction. arXiv preprint arXiv:2604.10210
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[46]
Measuring stomatal and guard cell metrics for plant physiology and growth using StoManager1
Wang, J.X., Renninger, H.J., Ma, Q., Jin, S.C., 2024. Measuring stomatal and guard cell metrics for plant physiology and growth using StoManager1. Plant Physiology 195, 378– 394. https://doi.org/10.1093/plphys/kiad688
-
[47]
A deep learning method for fully automatic stomatal morphometry and maximal conductance estimation
Gibbs, J.A., McAusland, L., Robles -Zazueta, C.A., Murchie, E.H., Burgess, A.J., 2021. A deep learning method for fully automatic stomatal morphometry and maximal conductance estimation. Frontiers in Plant Science 12, 780180. https://doi.org/10.3389/fpls.2021.780180
-
[48]
Gibbs, J.A., Gibbs, A.J., 2025. Integrating phenotyping and modelling approaches —StomaGAN: improving image -based analysis of stomata through generative adversarial networks. in silico Plants 7, diaf002. https://doi.org/10.1093/insilicoplants/diaf002
-
[49]
Liang, X.Y., Xu, X.C., Wang, Z.W., He, L., Zhang, K.Q., Liang, B., Ye, J.L., Shi, J.W., Wu, X., Dai, M.Q., Zhou, J.J., Wang, Z.Y., Wang, X.M., Zhang, J.Y., Wu, J., Lin, Y.J., 2022. StomataScorer: a portable and high‐throughput leaf stomata trait scorer com bined with deep learning and an improved CV model. Plant Biotechnology Journal 20, 577 –591. https:/...
-
[50]
E., Li, Z., Sun, X., & Yang, X
Wang, L., Guo, Y., Zhang, Z., Qin, M. E., Li, Z., Sun, X., & Yang, X. (2025). OS -MSWGBM: Intelligent Analysis of Organic Synthesis Based on Multiscale Subtraction Weighted Network and LightGBM. MATCH-COMMUNICATIONS IN MATHEMATICAL AND IN COMPUTER CHEMISTRY, 93(1). https://doi.org/10.46793/match.93-1.005W
-
[51]
E., Jiao, X., Chai, Y., & Yang, X
Guo, Y., Peng, L., Li, Z., Qin, M. E., Jiao, X., Chai, Y., & Yang, X. (2024). OCS-TGBM: Intelligent Analysis of Organic Chemical Synthesis Based on Topological Data Analysis and LightGBM. MATCH-COMMUNICATIONS IN MATHEMATICAL AND IN COMPUTER CHEMISTRY, 91(3), 557-592. https://doi.org/10.46793/match.91-3.557G. 24 Supporting Information Evaluation Metrics Th...
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.