Engineering Resource-constrained Software Systems with DNN Components: a Concept-based Pruning Approach
Pith reviewed 2026-05-10 16:38 UTC · model grok-4.3
The pith
A concept-based pruning method for DNNs guided by interpretable concepts and system requirements produces smaller, computationally efficient models that maintain effectiveness on image classification tasks.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Our concept-based pruning solution analyzes neuron activations to identify important neurons from a system requirements viewpoint and uses this information to guide the DNN pruning. Our results show that concept-based pruning efficiently generates much smaller, effective pruned DNNs.
Load-bearing premise
That neuron activations can be reliably mapped to human-interpretable concepts (features, colors, classes) in a way that produces pruning decisions superior to standard magnitude-based or random methods, and that this mapping remains stable across the evaluated dataset and network.
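The manuscript does not spell out how this mapping is computed, so any concrete form is conjecture. One minimal reading, sketched below with hypothetical names (not the paper's actual procedure), scores each neuron by the contrast between its mean activation on concept-bearing images and on the rest, then keeps the top-scoring fraction:

```python
import numpy as np

def concept_importance(activations, concept_mask):
    """Score each neuron by how much more it fires on concept images.

    activations: (n_images, n_neurons) post-activation values.
    concept_mask: boolean (n_images,) flags marking images that exhibit
    the concept of interest (a required feature, color, or class).
    """
    on = activations[concept_mask].mean(axis=0)
    off = activations[~concept_mask].mean(axis=0)
    return on - off  # high score = neuron selective for the concept

def keep_mask(scores, keep_ratio):
    """Boolean mask keeping the top keep_ratio fraction of neurons."""
    k = max(1, int(round(len(scores) * keep_ratio)))
    keep = np.zeros(len(scores), dtype=bool)
    keep[np.argsort(scores)[-k:]] = True
    return keep
```

If the paper's criterion differs materially from such a contrast score, the premise above should be restated in the paper's own terms.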
Original abstract
Deep Neural Networks (DNNs) are widely used by engineers to solve difficult problems that require predictive modeling from data. However, these models are often massive, with millions or billions of parameters, and require substantial computational power, RAM, and storage. This becomes a limitation in practical scenarios where strict size and resource constraints must be respected. In this paper, we present a novel concept-based pruning technique for DNNs that guides pruning decisions using human-interpretable concepts, such as features, colors, and classes. This is particularly important in a software engineering context, as DNNs are integrated into systems and must be pruned according to specific system requirements. Our concept-based pruning solution analyzes neuron activations to identify important neurons from a system requirements viewpoint and uses this information to guide the DNN pruning. We assess our solution using the VGG-19 network and a dataset of 26'384 RGB images, focusing on its ability to produce small, effective pruned DNNs and on the computational complexity and performance of these pruned DNNs. We also analyzed the pruning efficiency of our solution and compared alternative configurations. Our results show that concept-based pruning efficiently generates much smaller, effective pruned DNNs. Pruning greatly improves the computational efficiency and performance of DNNs, properties that are particularly useful for practical applications with stringent memory and computational time constraints. Finally, alternative configuration options enable engineers to identify trade-offs adapted to different practical situations.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a concept-based pruning method for DNNs in resource-constrained software systems. Neuron activations are analyzed to identify important neurons via human-interpretable concepts (features, colors, classes) aligned with system requirements; this information then guides pruning to yield smaller yet effective models. The approach is assessed on VGG-19 using a 26,384-image RGB dataset, with claims of improved computational efficiency, performance, and the ability to explore trade-offs through alternative configurations.
Significance. If the mapping from activations to requirements-aligned concepts can be shown to produce pruning decisions that are both stable and superior (or at least competitive) to standard magnitude-based or random baselines, the work would offer a practically useful contribution to software engineering of DNN components. It directly addresses the need for interpretable, system-level pruning rather than purely parameter-count-driven reduction, which is relevant for embedded or edge deployments.
Major comments (2)
- [Abstract / Evaluation] The central claim that concept-based pruning 'efficiently generates much smaller, effective pruned DNNs' is not backed by any quantitative results. No accuracy values, size-reduction ratios, inference-time improvements, or statistical comparisons against magnitude-based or random pruning baselines are reported, leaving the asserted advantage over existing techniques unverifiable.
- [Approach / Concept Extraction] The process for mapping neuron activations to human-interpretable concepts (activation thresholding, clustering, labeling, or automated extraction) is not detailed. Without it, one cannot assess whether the 'system requirements viewpoint' is reproducible or whether the mapping is stable beyond the single evaluated dataset and network.
Minor comments (2)
- [Abstract] Dataset size is written as 26'384; standard notation is 26,384.
- [Related Work] The manuscript should cite and briefly contrast with established DNN pruning literature (e.g., magnitude pruning, lottery-ticket hypothesis, structured pruning) to clarify the novelty of the concept-based criterion.
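For reference, the magnitude-based baseline the first major comment asks for is simple to implement. The sketch below uses the standard L1-norm filter criterion from the structured-pruning literature; it is the baseline, not the manuscript's method, and the function names are ours:

```python
import numpy as np

def l1_filter_scores(conv_weight):
    """L1 norm of each output filter of a convolutional layer.

    conv_weight: (out_channels, in_channels, kH, kW) weight tensor.
    """
    return np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)

def weakest_filters(conv_weight, prune_ratio):
    """Indices of the filters a magnitude baseline would remove."""
    scores = l1_filter_scores(conv_weight)
    n_prune = int(len(scores) * prune_ratio)
    return np.sort(np.argsort(scores)[:n_prune])
```

Reporting the concept-based method's accuracy and size side by side with this criterion at matched prune ratios would directly answer the comparability objection.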
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight important areas where clarity and detail can be improved, particularly regarding quantitative support for claims and reproducibility of the method. We address each major comment below and describe the revisions planned for the next version of the paper.
Point-by-point responses
- Referee: [Abstract / Evaluation] The central claim that concept-based pruning 'efficiently generates much smaller, effective pruned DNNs' is unsupported by any quantitative results. No accuracy values, size-reduction ratios, inference-time improvements, or statistical comparisons to magnitude-based pruning or random baselines are reported, leaving the asserted advantage over existing techniques unverifiable.
  Authors: We agree that the abstract would be strengthened by specific quantitative metrics supporting the claims of efficiency and effectiveness. The evaluation section reports results from experiments on VGG-19 with the 26,384-image RGB dataset, including an analysis of pruning efficiency and comparisons among alternative configurations of our approach. To address the concern directly, we will revise the abstract to state key results explicitly, such as retained accuracy, model size reduction ratios, and inference-time gains, and we will add direct comparisons to magnitude-based pruning and random baselines with appropriate statistical measures. This will make the advantages verifiable without altering the core findings. Revision: yes.
- Referee: [Approach / Concept Extraction] The process for mapping neuron activations to human-interpretable concepts (activation thresholding, clustering, labeling, or automated extraction) is not detailed. Without this, it is impossible to assess whether the 'system requirements viewpoint' is reproducible or whether the mapping is stable across the single evaluated dataset and network.
  Authors: We acknowledge that additional detail on the concept extraction process is necessary to ensure reproducibility and to allow assessment of stability. In the revised manuscript, we will expand the approach section with a precise description of the steps involved: the activation thresholding criteria, the clustering technique applied to neuron activations, how concepts (features, colors, classes) are labeled in alignment with system requirements, and any automated extraction components. We will also include pseudocode for the mapping procedure and a brief analysis of its behavior on the evaluated dataset and VGG-19 network to address the stability concern. Revision: yes.
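The promised stability analysis could be made concrete with a simple overlap measure. The sketch below (hypothetical helper names, not from the manuscript) assumes the concept-neuron set is recomputed on disjoint data splits and reports the mean pairwise Jaccard overlap between the selections:

```python
def jaccard(a, b):
    """Overlap between two sets of selected neuron indices (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def split_stability(neuron_sets):
    """Mean pairwise Jaccard overlap across per-split neuron selections."""
    pairs = [(i, j) for i in range(len(neuron_sets))
             for j in range(i + 1, len(neuron_sets))]
    return sum(jaccard(neuron_sets[i], neuron_sets[j])
               for i, j in pairs) / len(pairs)
```

An overlap near 1.0 across splits would support the stability premise; a low value would suggest the concept mapping is an artifact of the particular sample.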
Circularity Check
No significant circularity; empirical technique with external evaluation
Full rationale
The paper presents a concept-based pruning method for DNNs and evaluates it experimentally on VGG-19 using a 26,384-image RGB dataset, comparing alternative configurations for size, effectiveness, and efficiency. No equations, derivations, or analytical chains are involved. The claim of generating smaller, effective pruned DNNs rests on experimental outcomes rather than on fitted parameters, self-definitions, or self-citation chains. The evaluation stands against external benchmarks, and the paper does not invoke uniqueness theorems or rename known results as new derivations.