pith. machine review for the scientific record.

arxiv: 2604.20026 · v1 · submitted 2026-04-21 · 💻 cs.CV

Recognition: unknown

Investigation of cardinality classification for bacterial colony counting using explainable artificial intelligence


Pith reviewed 2026-05-10 02:08 UTC · model grok-4.3

classification 💻 cs.CV
keywords bacterial colony counting · cardinality classification · explainable AI · XAI · MicrobiaNet · visual similarity · neural network classifiers · imbalanced datasets

The pith

Explainable AI shows that high visual similarity between colony classes blocks further gains in bacterial counting accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper uses explainable artificial intelligence to inspect the MicrobiaNet model, which classifies bacterial colonies by their exact count per image. It finds that colonies of different cardinalities look too alike for the network to separate them reliably, especially when three or more are present. This data property, not the model architecture itself, is presented as the main barrier to better results. The authors therefore revise earlier claims that MicrobiaNet was fundamentally limited and recommend shifting toward similarity-aware models or density estimation methods. Readers interested in lab automation would see this as a concrete diagnosis of why current computer-vision counters plateau on crowded plates.

Core claim

Applying XAI techniques to MicrobiaNet demonstrates that high visual similarity across cardinality classes in the colony images is the dominant factor preventing accurate classification of groups with three or more individuals, rather than shortcomings in the network or training procedure; this revises prior assertions that the model itself was the primary obstacle.

What carries the argument

Explainable AI analysis of the MicrobiaNet cardinality classifier to isolate the role of visual similarity in classification errors

If this is right

  • Models that directly incorporate measures of visual similarity between classes should yield higher accuracy on high-cardinality colony images.
  • Density estimation methods may outperform direct cardinality classification when objects within an image are visually similar.
  • The same visual-similarity bottleneck likely affects other neural-network classifiers trained on imbalanced image datasets.
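The density-estimation alternative raised above can be made concrete: rather than naming a cardinality class, a model regresses a per-pixel density map whose integral is the count, a standard formulation in the counting literature. The snippet below is an illustrative NumPy sketch of the target construction and the count readout under that formulation, not the paper's method; `make_gaussian_target` and its `sigma` parameter are hypothetical names for this sketch.

```python
import numpy as np

def count_from_density_map(density: np.ndarray) -> float:
    """Estimate object count as the integral (sum) of a predicted density map."""
    return float(density.sum())

def make_gaussian_target(shape, centers, sigma=2.0):
    """Build a training target: one unit-mass Gaussian per annotated object,
    so the target map integrates to the true count."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=np.float64)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        target += g / g.sum()  # normalise each blob to mass 1
    return target

# Three synthetic "colonies": the target map sums to 3 even if blobs overlap,
# which is exactly the regime where per-class cardinality labels break down.
target = make_gaussian_target((64, 64), [(10, 10), (30, 32), (31, 35)])
print(round(count_from_density_map(target), 3))  # → 3.0
```

Because each blob is normalised to unit mass before summation, overlapping colonies still contribute exactly one count each, which is why density maps sidestep the visual-similarity problem the paper diagnoses.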

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Testing similarity-aware architectures on existing colony datasets would provide a direct check on whether addressing visual overlap lifts performance.
  • The finding may extend to other biological counting tasks where objects overlap or share textures, such as cell or particle enumeration.
  • Running the same XAI pipeline on alternative colony-counting networks could test whether visual similarity remains the limiting factor across architectures.

Load-bearing premise

That the explanations produced by the chosen XAI method correctly identify visual similarity as the true cause of errors instead of reflecting artifacts of the XAI technique or dataset.

What would settle it

Train a new classifier that explicitly encodes visual similarity between cardinality classes and measure whether its accuracy on colonies of three or more improves substantially over MicrobiaNet on the same test images.
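One concrete way to "explicitly encode visual similarity between cardinality classes" is to replace one-hot training targets with soft targets that decay with the distance between counts, so confusing three colonies with four costs less than confusing three with seven. This is an illustrative sketch of that idea, not the authors' proposal; `ordinal_soft_labels` and the temperature `tau` are hypothetical names introduced here.

```python
import numpy as np

def ordinal_soft_labels(y: int, n_classes: int, tau: float = 1.0) -> np.ndarray:
    """Soft target whose mass decays with distance between cardinality classes,
    encoding that neighbouring counts (e.g. 3 vs 4) are visually similar."""
    k = np.arange(n_classes)
    p = np.exp(-np.abs(k - y) / tau)
    return p / p.sum()

def soft_cross_entropy(pred_probs: np.ndarray, soft_target: np.ndarray) -> float:
    """Cross-entropy against a soft target distribution."""
    return float(-(soft_target * np.log(pred_probs + 1e-12)).sum())

# With 7 cardinality classes, predicting the adjacent class (4 instead of 3)
# is penalised less than predicting a distant one (6), unlike one-hot loss.
target = ordinal_soft_labels(3, 7)
near = np.full(7, 0.01); near[4] = 0.94   # most mass on the adjacent class
far = np.full(7, 0.01); far[6] = 0.94     # most mass on a distant class
print(soft_cross_entropy(near, target) < soft_cross_entropy(far, target))  # → True
```

A model trained this way would be graded on how far off its count is, not merely whether it is exactly right, which is the behaviour the proposed experiment would need in order to isolate the visual-similarity effect.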

Figures

Figures reproduced from arXiv: 2604.20026 by Allen Donald, Minghua Zheng, Na Helian, Peter C. R. Lane, Yi Sun.

Figure 1: Example segments from seven different classes. Some segments, such as the first…
Figure 2: Masks for the segments in Figure 1.
Figure 3: Masked segments generated based on Figure 1 and Figure 2. Each colony image…
Figure 4: MicrobiaNet architecture. MicrobiaNet is selected in this study to investigate its explainability, aiming to identify factors that could inform strategies for further performance improvement with XAI. To the best of our knowledge, MicrobiaNet currently…
Figure 5: Loss value and F1 score during training on the MicrobiaS1 dataset. The differ…
Figure 6: Confusion matrix for MicrobiaS1 validation results. Most misclassifications…
Figure 7: Examples of incorrect predictions from the MicrobiaS1 validation set. Colonies…
Figure 8: PCA-reduced (left) and t-SNE-reduced (right) representations of the last net…
Figure 9: Same as Figure 8, but for the validation set.
Figure 10: Visualisation of feature maps that strongly activate the first convolutional…
Figure 11: Same as Figure 10, but for the second convolutional kernel.
Figure 12: Class activation map visualisation for One-colony images. Highlighted regions…
Figure 13: Confusion matrix for MicrobiaS1B1 training results. Most misclassifications…
Figure 14: Confusion matrix for MicrobiaS1(B1) validation results. Most misclassifications…
Figure 15: PCA-reduced (left) and t-SNE-reduced (right) representations of the last net…
Figure 16: Same as Figure 15, but for the MicrobiaS1(B1) validation set.
Figure 17: Confusion matrices for MicrobiaS1C1 training results (left) and validation…
Figure 18: Four-class confusion matrix obtained by converting the baseline seven-class Mi…
Figure 19: PCA-reduced (left) and t-SNE-reduced (right) representations of the last net…
Figure 20: Same as Figure 19, but for the validation set.
Original abstract

Automatic bacterial colony counting is a highly sought-after technology in modern biological laboratories because it eliminates manual counting effort. Previous work has observed that MicrobiaNet, currently the best-performing cardinality classification model for colony counting, has difficulty distinguishing colonies of three or more individuals. However, it is unclear if this is due to properties of the data together with inherent characteristics of the MicrobiaNet model. By analysing MicrobiaNet with explainable artificial intelligence (XAI), we demonstrate that XAI can provide insights into how data properties constrain cardinality classification performance in colony counting. Our results show that high visual similarity across classes is the key issue hindering further performance improvement, revising prior assertions about MicrobiaNet. These findings suggest future work should focus on models that explicitly incorporate visual similarity or explore density estimation approaches, with broader implications for neural network classifiers trained on imbalanced datasets.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper applies explainable AI (XAI) techniques to analyze the MicrobiaNet model for bacterial colony cardinality classification. It concludes that high visual similarity across cardinality classes (especially for counts of three or more) is the primary performance bottleneck, revising earlier interpretations of MicrobiaNet's limitations, and recommends future models that explicitly handle similarity or shift to density estimation.

Significance. If the XAI analysis is rigorously validated, the work offers a concrete case study of using post-hoc explanations to diagnose data-driven constraints on CNN performance in imbalanced visual classification tasks. This could inform better practices for colony counting automation and analogous problems in medical imaging or object counting where visual similarity and class imbalance coexist.

major comments (2)
  1. [Abstract and Results] The central claim that XAI demonstrates high visual similarity as the key limiter (revising prior MicrobiaNet assertions) lacks reported quantitative validation or controls for XAI artifacts. No fidelity metrics, counterfactual tests, or comparisons across XAI methods (e.g., gradient-based vs. perturbation-based) are described to establish that attributions reflect true data properties rather than method-specific biases or dataset collection artifacts.
  2. [Discussion] The assumption that XAI explanations reliably isolate visual similarity as the causal factor for errors on cardinality ≥3 is load-bearing but unsupported by explicit tests. Without ablation on similarity-reduced data, performance gains after targeted interventions, or human evaluation of explanations, the conclusion risks conflating correlation in saliency maps with causation.
minor comments (1)
  1. [Abstract] The abstract would benefit from naming the specific XAI technique(s) employed and at least one quantitative result (e.g., overlap scores or error correlation) to allow readers to gauge the strength of the visual-similarity finding immediately.
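The fidelity metrics the report asks for (e.g. deletion scores) are straightforward to state: occlude pixels in order of decreasing attribution and watch how fast the model's class score collapses. Below is a minimal, model-agnostic sketch of the deletion curve, assuming only that `model` maps an image array to a scalar score; it is an illustration of the metric, not code from the paper.

```python
import numpy as np

def deletion_curve(model, image, saliency, steps=10):
    """Deletion fidelity check: zero out pixels in order of decreasing
    attribution and record the model's score at each step. A faithful
    saliency map should drive the score down quickly (small area under
    the curve)."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient pixels first
    scores = [model(image)]
    x = image.copy().ravel()
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x[order[i:i + chunk]] = 0.0              # delete a batch of pixels
        scores.append(model(x.reshape(image.shape)))
    return np.array(scores)

# Toy check: a "model" that scores total brightness, with a saliency map
# that exactly matches the bright blob, yields a monotone drop in score.
img = np.zeros((8, 8)); img[2:4, 2:4] = 1.0
model = lambda x: float(x.sum())
curve = deletion_curve(model, img, saliency=img)
print(curve[0], curve[-1])  # score falls from 4.0 to 0.0
```

Comparing such curves across attribution methods (gradient-based vs. perturbation-based) is one way to rule out the method-specific biases the referee flags.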

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review. The comments highlight important aspects of rigor in XAI validation that we will address in the revision. Below we respond point by point to the major comments.

Point-by-point responses
  1. Referee: [Abstract and Results] The central claim that XAI demonstrates high visual similarity as the key limiter (revising prior MicrobiaNet assertions) lacks reported quantitative validation or controls for XAI artifacts. No fidelity metrics, counterfactual tests, or comparisons across XAI methods (e.g., gradient-based vs. perturbation-based) are described to establish that attributions reflect true data properties rather than method-specific biases or dataset collection artifacts.

    Authors: We agree that the original manuscript relies primarily on qualitative interpretation of XAI attributions without explicit quantitative controls. The analysis used established post-hoc methods to reveal consistent patterns of visual similarity across cardinality classes, which revises earlier model-centric interpretations. To strengthen this, we will add fidelity metrics (e.g., insertion/deletion scores), cross-method comparisons, and controls for potential artifacts in the revised manuscript. revision: yes

  2. Referee: [Discussion] The assumption that XAI explanations reliably isolate visual similarity as the causal factor for errors on cardinality ≥3 is load-bearing but unsupported by explicit tests. Without ablation on similarity-reduced data, performance gains after targeted interventions, or human evaluation of explanations, the conclusion risks conflating correlation in saliency maps with causation.

    Authors: The XAI results demonstrate a strong correlation between highlighted visual features and classification errors for higher cardinalities, supporting the revision of prior assertions. We acknowledge the absence of explicit causal tests such as ablations on similarity-reduced data. Generating such a dataset would require substantial new experimental effort beyond the current scope. We will expand the discussion to clarify the correlational nature of the findings, add suggestions for targeted interventions as future work, and note the value of human evaluation where feasible. revision: partial
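The correlational claim in this response can at least be made quantitative: correlate the cosine similarity of class-mean features with the off-diagonal confusion counts. A positive correlation supports, without proving, the visual-similarity diagnosis. The sketch below uses synthetic class means and a synthetic confusion matrix, not the paper's data; `similarity_error_correlation` is a name introduced here.

```python
import numpy as np

def similarity_error_correlation(class_means: np.ndarray, confusion: np.ndarray) -> float:
    """Pearson correlation between cosine similarity of class-mean features
    and off-diagonal confusion counts across all ordered class pairs."""
    unit = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    cos = unit @ unit.T
    mask = ~np.eye(len(confusion), dtype=bool)   # off-diagonal pairs only
    return float(np.corrcoef(cos[mask], confusion[mask])[0, 1])

# Toy example: classes 2 and 3 have near-identical mean features and absorb
# most of the confusion, so the correlation comes out positive.
means = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [0.72, 0.69]])
conf = np.array([[50, 1, 1, 1],
                 [1, 50, 1, 1],
                 [1, 1, 30, 20],
                 [1, 1, 22, 28]])
print(similarity_error_correlation(means, conf) > 0)  # → True
```

Reporting a statistic like this alongside the saliency maps would let readers gauge how tightly similarity tracks the errors without rerunning the XAI pipeline.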

Circularity Check

0 steps flagged

No circularity: XAI analysis applies external methods to pre-existing model and data

full rationale

The paper applies standard XAI techniques (e.g., saliency or attribution methods) to the existing MicrobiaNet model and colony-counting dataset to interpret why performance drops for cardinality classes ≥ 3. The central claim—that high visual similarity across classes is the key limiter—is an empirical observation drawn from the resulting attributions rather than a quantity fitted to the data or presupposed by definition. No equations reduce the result to its inputs by construction, no parameters are renamed as predictions, and the analysis does not depend on load-bearing self-citations or uniqueness theorems from the authors' prior work. The derivation chain is therefore self-contained as an interpretive study using independent tools.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract describes an empirical XAI investigation with no new mathematical derivations, free parameters, or postulated entities; it relies on standard assumptions of XAI applicability to CNNs.

pith-pipeline@v0.9.0 · 5451 in / 1047 out tokens · 59865 ms · 2026-05-10T02:08:16.117644+00:00 · methodology

