pith. machine review for the scientific record.

arxiv: 2603.13182 · v2 · submitted 2026-03-13 · 💻 cs.CV

Recognition: no theorem link

Diffusion-Based Feature Denoising and Using NNMF for Robust Brain Tumor Classification

Authors on Pith no claims yet

Pith reviewed 2026-05-15 11:21 UTC · model grok-4.3

classification 💻 cs.CV
keywords brain tumor classification · MRI · NNMF · diffusion denoising · adversarial robustness · CNN classifier · feature selection · medical imaging

The pith

NNMF feature extraction combined with diffusion-based purification enables robust brain tumor classification from MRI images against adversarial attacks

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper proposes a framework that extracts compact features from MRI brain tumor images using Non-Negative Matrix Factorization, selects the most discriminative ones with statistical tests, and purifies them with a diffusion process before feeding them into a lightweight CNN classifier. The goal is to maintain high classification accuracy on clean data while making the system resistant to adversarial perturbations that could mislead standard deep learning models in medical diagnosis. By operating in feature space after NNMF, the approach aims to preserve interpretability and effectiveness without relying on complex or heavy models. Results indicate competitive performance on standard accuracy metrics alongside marked improvements in robustness when tested against AutoAttack.
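The first stage of this pipeline is easy to make concrete. Below is a minimal sketch of NNMF feature extraction with unit-norm feature vectors, using scikit-learn's NMF as a stand-in for the paper's MATLAB implementation; the random data, k = 15 (following Figure 2), and all settings are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for flattened, non-negative MRI images.
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))

# NNMF factorization X ≈ W @ H with k = 15 components (cf. Figure 2).
k = 15
model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # (200, 15) per-image NNMF activations
H = model.components_        # (15, 4096) non-negative basis images

# Unit-norm feature vectors, matching the L2 normalization shown in Figure 4.
features = W / np.linalg.norm(W, axis=1, keepdims=True)
```

The normalized rows of `W` are the compact feature vectors the rest of the pipeline (selection, purification, classification) operates on.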

Core claim

The central claim is that integrating Non-Negative Matrix Factorization for interpretable feature representations, statistical selection of discriminative components, and a diffusion-based feature purification module allows lightweight CNNs to achieve competitive classification accuracy on brain tumor MRI images while significantly improving robustness to adversarial attacks generated by AutoAttack.

What carries the argument

Non-negative matrix factorization (NNMF) for extracting compact interpretable features from MRI data, followed by diffusion-based purification consisting of forward noise addition and a learned denoiser network applied before classification.

Load-bearing premise

That adding and then removing noise via the diffusion process eliminates adversarial perturbations while keeping the selected NNMF features' ability to distinguish between tumor types intact.
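A back-of-the-envelope check of this premise: at a late diffusion step, the injected Gaussian noise should dwarf any bounded adversarial perturbation, which is what the learned denoiser is then trained to strip away. The sketch below assumes a standard DDPM-style linear beta schedule (the paper does not publish its schedule); the feature vector and perturbation are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 15
x0 = rng.random(d)
x0 /= np.linalg.norm(x0)                  # clean unit-norm NNMF feature vector
delta = 0.10 * rng.standard_normal(d)     # stand-in adversarial perturbation
x_adv = x0 + delta

# Assumed linear beta schedule; alpha_bar[t] is the cumulative signal level.
betas = np.linspace(1e-4, 0.02, 50)
alpha_bar = np.cumprod(1.0 - betas)

# Forward noising at a late step (cf. t = 41 in Figure 15).
t = 40
eps = rng.standard_normal(d)
x_t = np.sqrt(alpha_bar[t]) * x_adv + np.sqrt(1 - alpha_bar[t]) * eps

# The surviving adversarial component has norm sqrt(alpha_bar[t]) * ||delta||,
# while the injected noise has typical norm sqrt(1 - alpha_bar[t]) * sqrt(d),
# so the structured perturbation is largely drowned before denoising.
adv_residual = np.sqrt(alpha_bar[t]) * np.linalg.norm(delta)
noise_scale = np.sqrt(1 - alpha_bar[t]) * np.sqrt(d)
```

Whether the denoiser then recovers the clean discriminative features, rather than an arbitrary nearby point, is exactly the load-bearing question.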

What would settle it

A demonstration that robust accuracy under AutoAttack falls substantially below clean accuracy, or merely matches that of undefended models, would show that the purification step fails to provide the claimed protection.

Figures

Figures reproduced from arXiv: 2603.13182 by Hiba Adil Al-kharsan, Róbert Rajkó.

Figure 1
Figure 1: Stages of the proposed framework. 3.1. Preprocessing data set: The data set was selected from Kaggle, a well-known platform for data science and machine learning research. It contains brain magnetic resonance images with matching segmentation masks, which are typically used to train and evaluate brain tumor segmentation models [19], and consists of approximately 2,200 brain magnetic reson… view at source ↗
Figure 2
Figure 2: NNMF Basis Components (k = 15). This figure shows the learned NNMF basis components obtained from the training data. Each basis image represents a non-negative spatial pattern that contributes to reconstructing brain MRI images. The components capture meaningful anatomical structures such as skull boundaries, tissue distribution, and localized density variations. The variety across components indicates that NNMF deco… view at source ↗
Figure 3
Figure 3: Example Test Image and Its Normalized NNMF Feature Vector. This figure shows an example test MRI image from the normal class alongside its normalized NNMF feature vector. The bar plot shows the activation strength of each NNMF component for this image, highlighting that only a subset of components exhibits strong responses. This sparse and selective activation pattern suggests that NNMF features en… view at source ↗
Figure 4
Figure 4: L2 Norm of Xtest After Normalization. This figure reports the L2 norm of all normalized test feature vectors. The values are tightly concentrated around one, confirming the correctness and stability of the normalization step. Ensuring unit-norm feature vectors is critical for fair comparison between samples and for robustness evaluation, as it prevents feature-magnitude variations from dominating the classifier or … view at source ↗
Figure 5
Figure 5: Class-wise Mean NNMF Features (Normalized, TEST). This figure shows the mean activation of each NNMF component for the normal and tumor classes after feature normalization. Clear differences are visible across several components: certain features show consistently higher activation for tumor samples, while others are more prominent for normal samples. These class-dependent activation patterns indicat… view at source ↗
Figure 6
Figure 6: Top-Activated TEST Samples per Component (Using Normalized Xtest). This figure shows the most strongly activated test sample for each NNMF component based on normalized feature vectors. For each component, the corresponding image and its activation value are shown along with the class label. The results reveal that some components are frequently activated by tumour images, while others respond more stro… view at source ↗
Figure 7
Figure 7: Top-15 Features – AUC. This figure shows the AUC of the top-ranked NNMF features from the feature selection step. Each bar corresponds to a single NNMF component and reflects its individual ability to distinguish tumor samples from normal ones. Higher AUC values indicate stronger discriminative ability, while values closer to 0.5 suggest limited separability. The scores show that s… view at source ↗
Figure 8
Figure 8: Effect Size vs Significance. This figure shows the relation between effect size and statistical significance for NNMF features. The horizontal axis marks Cohen's d, where positive values correspond to higher activation in tumor samples and negative values to higher activation in normal samples. The vertical axis shows the negative logarithm of the p-value obtained from Welch's t-test. Features w… view at source ↗
Figure 9
Figure 9: Top Feature Distributions (Normal vs Tumor). This figure displays boxplots of the most discriminative NNMF features, comparing their normalized distributions between the normal and tumor classes. The plots reveal visible differences in the means and spreads of the selected features, providing visual confirmation of their discriminative power. These distributions complement the statistical… view at source ↗
Figure 10
Figure 10: Class Mean Heatmap (Top-15 Features). This figure summarizes the class-wise mean activation of the top-ranked NNMF features as a heatmap. Each column corresponds to a selected NNMF component, while the rows represent the normal and tumor classes. The color intensity reflects the average feature activation, enabling quick identification of tumor-dominant and normal-dominant features. The observed… view at source ↗
Figure 11
Figure 11: Training Progress – Accuracy. This figure shows the evolution of classification accuracy over training. The light curve represents raw mini-batch training accuracy, while the smoothed curve highlights the overall trend. Validation accuracy (black markers) is measured periodically and stays close to the training curve, indicating stable learning and limited overfitting. Accuracy climbs sharply during the early iterati… view at source ↗
Figure 12
Figure 12: Training Progress – Loss. This figure reports the training and validation loss curves over iterations. The training loss decreases steadily, while the validation loss follows a similar downward trend at a slightly higher level, which is expected. The parallel behaviour of both curves suggests that optimization is proceeding normally and the model generalizes reasonably well. The absence of a big difference … view at source ↗
Figure 13
Figure 13: VAL Confusion Matrix (Acc ≈ 0.83). This confusion matrix summarizes the model's performance on the validation set. Correct predictions appear on the diagonal, while off-diagonal entries correspond to misclassifications. The matrix shows a high rate of correctly classified normal and tumor samples, with errors mainly occurring when normal images are predicted as tumor and vice versa. The overall validation accur… view at source ↗
Figure 14
Figure 14: TEST Confusion Matrix (Acc ≈ 0.851). This confusion matrix reports final performance on the unseen test set. The majority of samples lie on the diagonal, yielding an overall accuracy ≈ 0.851. Although this accuracy may seem lower than some current approaches, it is important to note that many reported results are obtained under standard (non-adversarial) conditions and often depend on complex deep … view at source ↗
Figure 15
Figure 15: Clean vs diffused NNMF feature vectors at a late diffusion time step (t = 41). The figure shows the impact of forward diffusion noise on selected feature components, illustrating how the original clean features x0 are gradually corrupted into the noisy features xt. view at source ↗
Figure 16
Figure 16: Impact of diffusion time on NNMF features at various timesteps (t = 1, 10, 25, and 50). As the diffusion timestep increases, the injected noise becomes more dominant, leading to higher distortion and variability in the feature representations. view at source ↗
Figure 17
Figure 17: Distribution of NNMF feature values before and after diffusion. The histogram comparison highlights the increased spread of feature values caused by the diffusion step, indicating a deviation from the original feature distribution. view at source ↗
Figure 18
Figure 18: Noise energy as a function of diffusion timestep. The plot shows the L2 distance between clean and noisy feature vectors, ∥xt − x0∥2, increasing with diffusion time, quantitatively confirming the gradual corruption introduced by the forward diffusion step. Although the cumulative diffusion schedule is not explicitly stated, its effect is inherently reflected in the progressive rise of the noise energy, wh… view at source ↗
Figure 19
Figure 19: Feature denoising example at diffusion step t = 41. The clean feature vector x0, its diffused version xt, and the denoiser output x̂0 are plotted to illustrate how the network suppresses diffusion noise and moves the output closer to the clean features. 3.6. Feature-Space Denoiser Training: After the construction of diffusion-corrupted feature pairs in the previous step, this stage focuses on learning a feature-l… view at source ↗
Figure 20
Figure 20: Denoising error is lower than the noisy baseline. Each point compares the noisy reconstruction error, ∥xt − x0∥2 (x-axis), against the denoised reconstruction error, ∥x̂0 − x0∥2 (y-axis). Points lying below the identity line indicate successful error reduction after denoising, demonstrating that the denoiser effectively returns features close to the original clean representation. view at source ↗
Figure 21
Figure 21: Denoiser reconstruction error versus diffusion time. The plot shows ∥x̂0 − x0∥2 as a function of timestep t, highlighting how denoising difficulty changes with increasing diffusion strength. view at source ↗
Figure 22
Figure 22: Clean vs. diffusion-defended NNMF feature vector example on the test set, illustrating how the purification step alters the feature profile after forward noising and denoising. view at source ↗
Figure 23
Figure 23: (a) Confusion matrix on the test set using clean (non-defended) NNMF features, showing class-wise prediction outcomes for normal and tumor. (b) Confusion matrix on the test set after applying diffusion-based feature purification (defended features), highlighting changes in misclassification patterns compared to the clean case. view at source ↗
Figure 24
Figure 24: Test accuracy comparison between clean features and diffusion-defended (purified) features, quantifying the net effect of the defense (denoiser + classifier) on standard classification accuracy. Since the defense is stochastic due to the injected noise, Expectation over Transformation (EOT) is applied by averaging predictions over multiple random samples (K = 8). The robustness is reported per attack and as the final ro… view at source ↗
Figure 25
Figure 25: Clean and robust accuracy under AutoAttack (L∞, ϵ = 0.10) for the clean model and the proposed diffusion-based defense. The robust accuracy corresponds to the final AutoAttack score (minimum across the evaluated attacks). view at source ↗
Figure 26
Figure 26: Robust accuracy per AutoAttack component (APGD-CE and Square) for the baseline and defended models (L∞, ϵ = 0.10). The final robustness is calculated as the minimum accuracy across attacks. view at source ↗
Figure 27
Figure 27: Accuracy decline under AutoAttack for baseline versus defended models (L∞, ϵ = 0.10). The diffusion-based defense noticeably reduces the drop from clean to robust performance. 3.9.3. Log-Loss: Log-Loss (cross-entropy loss) measures the quality of predicted probabilities given the true labels: LogLoss = −(1/N) Σ_{i=1}^{N} [y_i log(p_i) + (1 − y_i) log(1 − p_i)] (5). Log-Loss heavily penalizes overconfident untr… view at source ↗
Figure 28
Figure 28: Comparison of classification and probabilistic metrics for baseline and diffusion-based defense under clean and adversarial settings. Higher values indicate better performance except for Brier Score and Log-Loss, where lower values are preferred. view at source ↗
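The Log-Loss definition quoted alongside Figure 27 is straightforward to write out. A minimal sketch with hypothetical probabilities, illustrating why an overconfident wrong prediction is penalized far more heavily than calibrated correct ones:

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy, as in Eq. (5): -(1/N) * sum of y*log(p) + (1-y)*log(1-p)."""
    p = np.clip(p_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Hypothetical labels and predicted tumor probabilities.
y = np.array([1, 0, 1, 1])
p_confident = np.array([0.9, 0.1, 0.8, 0.95])
p_overconfident_wrong = np.array([0.9, 0.1, 0.8, 0.01])  # last prediction badly wrong
```

A single confident mistake (the 0.01 on a true tumor) dominates the average, which is why the review pairs Log-Loss with accuracy when judging the defense.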
read the original abstract

Brain tumor classification from magnetic resonance imaging, which is also known as MRI, plays a sensitive role in computer-assisted diagnosis systems. In recent years, deep learning models have achieved high classification accuracy. However, their sensitivity to adversarial perturbations has become an important reliability concern in medical applications. This study suggests a robust brain tumor classification framework that combines Non-Negative Matrix Factorization (NNMF or NMF), lightweight convolutional neural networks (CNNs), and diffusion-based feature purification. Initially, MRI images are preprocessed and converted into a non-negative data matrix, from which compact and interpretable NNMF feature representations are extracted. Statistical metrics, including AUC, Cohen's d, and p-values, are used to rank and choose the most discriminative components. Then, a lightweight CNN classifier is trained directly on the selected feature groups. To improve adversarial robustness, a diffusion-based feature-space purification module is introduced. A forward noise method followed by a learned denoiser network is used before classification. System performance is estimated using both clean accuracy and robust accuracy under powerful adversarial attacks created by AutoAttack. The experimental results show that the proposed framework achieves competitive classification performance while significantly enhancing robustness against adversarial perturbations. The findings presuppose that combining interpretable NNMF-based representations with a lightweight deep approach and diffusion-based defense technique supplies an effective and reliable solution for medical image classification under adversarial conditions.
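The statistical ranking stage the abstract describes (AUC, Cohen's d, p-values) can be sketched with standard tools. The data below are synthetic, and the Welch t-test plus pooled-variance Cohen's d are assumed implementations, not the authors' code:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

# Synthetic NNMF feature matrix: 300 samples, 15 components.
rng = np.random.default_rng(2)
n, k = 300, 15
features = rng.random((n, k))
labels = rng.integers(0, 2, n)           # 0 = normal, 1 = tumor
features[labels == 1, 3] += 0.5          # make component 3 discriminative

scores = []
for j in range(k):
    f = features[:, j]
    auc = roc_auc_score(labels, f)
    # Welch's t-test (unequal variances), as in Figure 8.
    t_stat, p_val = ttest_ind(f[labels == 1], f[labels == 0], equal_var=False)
    pooled = np.sqrt((f[labels == 1].var(ddof=1) + f[labels == 0].var(ddof=1)) / 2)
    cohens_d = (f[labels == 1].mean() - f[labels == 0].mean()) / pooled
    scores.append((j, auc, cohens_d, p_val))

# Rank components by AUC, as in Figure 7; the shifted component should rank first.
ranked = sorted(scores, key=lambda s: s[1], reverse=True)
```

Selecting the top-ranked components then yields the compact feature groups the lightweight CNN is trained on.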

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance; this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a framework for robust brain tumor classification from MRI that first preprocesses images into a non-negative matrix, extracts compact features via Non-Negative Matrix Factorization (NNMF), ranks and selects the most discriminative components using AUC, Cohen's d, and p-values, trains a lightweight CNN classifier on the selected features, and applies a diffusion-based purification step (forward noise addition followed by a learned denoiser) immediately before classification to defend against adversarial attacks generated by AutoAttack. The central claim is that this pipeline delivers competitive clean accuracy together with substantially improved robust accuracy under adversarial perturbations.

Significance. If the missing quantitative results and implementation details were supplied and the claims held, the work would offer a concrete, interpretable route to adversarial robustness in medical imaging by combining low-rank factorization with diffusion-based feature-space denoising. Such a hybrid approach could be valuable for clinical CAD systems where both diagnostic performance and reliability against perturbations matter.

major comments (2)
  1. [Abstract] Abstract: The text asserts that 'the experimental results show that the proposed framework achieves competitive classification performance while significantly enhancing robustness against adversarial perturbations' yet supplies no numerical values whatsoever (clean accuracy, robust accuracy under AutoAttack, baseline comparisons, or statistical significance). This absence renders the headline claim unevaluable and is load-bearing for the entire contribution.
  2. [Methods (diffusion purification)] Diffusion-based purification module (methods): No information is given on (a) the noise schedule or variance schedule used in the forward process, (b) whether the denoiser was trained on clean NNMF features, Gaussian-noisy features, or adversarial features, or (c) any auxiliary loss that preserves the statistical ranking (AUC/Cohen’s d) of the selected NNMF components. Without these details the robustness claim cannot be assessed and may be falsified if the denoiser either fails to invert structured AutoAttack perturbations or smooths away the low-rank discriminative directions.
minor comments (2)
  1. [Abstract] Abstract: The parenthetical 'NNMF or NMF' is redundant; standard usage is NMF for Non-negative Matrix Factorization. Consistent terminology throughout would improve readability.
  2. [Abstract] Abstract: The final sentence uses 'supplies' where 'provides' or 'offers' would be more idiomatic; this is a minor stylistic point.
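To make the second major comment concrete, here is one assumed instantiation of the unspecified pieces: (a) a linear beta schedule for the forward process, and (b) a denoiser fit on Gaussian-noisy clean features, with a closed-form linear least-squares map standing in for the paper's learned denoiser network. Everything here is an illustrative assumption, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, t_steps = 1000, 15, 50

# (a) Assumed linear beta schedule; a_bar is the cumulative signal level at the
# final step, giving the closed-form forward marginal of the diffusion process.
betas = np.linspace(1e-4, 0.02, t_steps)
a_bar = np.cumprod(1 - betas)[-1]

# (b) Denoiser trained on (noisy, clean) pairs built from clean features only.
x0 = rng.random((n, d))                                        # stand-in clean NNMF features
x_t = np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * rng.standard_normal((n, d))

# Linear least-squares denoiser x0_hat = x_t @ M (a stand-in for a learned network).
M, *_ = np.linalg.lstsq(x_t, x0, rcond=None)
x0_hat = x_t @ M

noisy_err = np.linalg.norm(x_t - x0, axis=1).mean()
denoised_err = np.linalg.norm(x0_hat - x0, axis=1).mean()
# As in Figure 20, denoised points should land below the identity line on average.
```

Whether such a denoiser also inverts structured AutoAttack perturbations, rather than only Gaussian noise, is precisely what the missing details leave open.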

Simulated Author's Rebuttal

2 responses · 0 unresolved

We appreciate the referee's thorough review and constructive feedback on our manuscript. We address each of the major comments below and have made revisions to the manuscript to improve clarity and completeness.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The text asserts that 'the experimental results show that the proposed framework achieves competitive classification performance while significantly enhancing robustness against adversarial perturbations' yet supplies no numerical values whatsoever (clean accuracy, robust accuracy under AutoAttack, baseline comparisons, or statistical significance). This absence renders the headline claim unevaluable and is load-bearing for the entire contribution.

    Authors: We agree that the abstract would be strengthened by the inclusion of specific numerical results to support the claims. In the revised manuscript, we have updated the abstract to include key quantitative findings from our experiments, such as the clean and robust accuracies achieved by the proposed framework along with baseline comparisons. revision: yes

  2. Referee: [Methods (diffusion purification)] Diffusion-based purification module (methods): No information is given on (a) the noise schedule or variance schedule used in the forward process, (b) whether the denoiser was trained on clean NNMF features, Gaussian-noisy features, or adversarial features, or (c) any auxiliary loss that preserves the statistical ranking (AUC/Cohen’s d) of the selected NNMF components. Without these details the robustness claim cannot be assessed and may be falsified if the denoiser either fails to invert structured AutoAttack perturbations or smooths away the low-rank discriminative directions.

    Authors: We thank the referee for highlighting the lack of implementation details regarding the diffusion-based purification module. These details are important for reproducibility and assessment of the claims. We have revised the Methods section to provide information on the noise schedule used in the forward process, the training data for the denoiser, and the loss functions employed, including any auxiliary terms to maintain the discriminative properties of the selected features. revision: yes

Circularity Check

0 steps flagged

No circularity: linear empirical pipeline

full rationale

The manuscript presents a sequential processing pipeline—MRI preprocessing to non-negative matrix, NNMF factorization, statistical ranking of components via AUC/Cohen’s d/p-values, lightweight CNN training, and a separate diffusion forward-noise + learned denoiser step—without any derivation, equation, or uniqueness claim that reduces to its own inputs. No self-citation is invoked to justify a load-bearing mathematical step, no fitted parameter is relabeled as a prediction, and no ansatz is smuggled via prior work. The robustness result is asserted via reported clean and AutoAttack accuracies rather than by construction from the preceding stages. The framework is therefore self-contained as an empirical composition of standard techniques.

Axiom & Free-Parameter Ledger

1 free parameters · 2 axioms · 0 invented entities

The framework depends on standard domain assumptions about matrix factorization and diffusion models without introducing new entities or many fitted parameters beyond component selection.

free parameters (1)
  • Number of NNMF components
    Determined by ranking using AUC, Cohen's d, and p-values to select most discriminative features.
axioms (2)
  • domain assumption MRI image data can be effectively represented and decomposed using non-negative matrix factorization.
    Invoked in the initial feature extraction step from preprocessed images.
  • domain assumption A diffusion process can be used to purify features by adding and then removing noise to counter adversarial perturbations.
    Basis for the robustness enhancement module.

pith-pipeline@v0.9.0 · 5550 in / 1246 out tokens · 76270 ms · 2026-05-15T11:21:04.142464+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

30 extracted references · 30 canonical work pages · 3 internal anchors

  1. [1]

    Diffusion-Based Feature Denoising and Using NNMF for Robust Brain Tumor Classification

    Introduction The classification of brain tumors from magnetic resonance imaging (MRI) is a large and complex component in computer-supported diagnostic systems. Early and careful detection improves handling, design, and patient survival. In recent years, deep learning approaches, mostly convolutional neural networks (CNNs), have shown remarkable performan...

  2. [2]

    [9] proposed a deep learning framework for brain tumor classification using MRI images, integrating convolutional neural networks with data augmentation techniques

    Related Works • Deep Learning-Based Brain Tumor Classification (Recent Advances) Almuhaimeed et al. [9] proposed a deep learning framework for brain tumor classification using MRI images, integrating convolutional neural networks with data augmentation techniques. Their model achieved high classification accuracy exceeding 97%, demonstrating the effective...

  3. [3]

    COCO annotation

    Materials and Methods To develop a robust and adversarial classifier, a structured sequence of fundamental phases is required for the proper implementation of a machine learning model. In this work, a neural network–based model is adopted as the core classification algorithm. The proposed classifier is constructed through four main stages, each comprising...

  4. [4]

    MATLAB was applied to NNMF for the extraction of features, statistical classification, CNN training, and a diffusion-based purification system

    Implementation and Computational Performance Analysis The suggested framework was running using a combined MATLAB–Python pipeline. MATLAB was applied to NNMF for the extraction of features, statistical classification, CNN training, and a diffusion-based purification system. The trained models were exported to the ONNX format and run in Python using PyTorc...

  5. [5]

    Unlike a traditional end-to-end deep learning example

Conclusion This study approaches a robust and structured framework for brain tumor classification based on NNMF feature extraction, statistical feature selection, CNN-based classification, and diffusion-based feature purification. Unlike a traditional end-to-end deep learning example that builds just on high-dimensional image input, the propose...

  6. [6]

Acknowledgement This work was supported by the Distinguished Professor Program of Óbuda University. The authors are also grateful for the possibility of using the HUN-REN Cloud https://science-cloud.hu/en [24] which helped us achieve some particular results published in this paper. Hiba Adil Al-kharsan gratefully acknowledges the financial support of the...

  7. [7]

    A survey on deep learning in medical image analysis

Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Medical Image Analysis 2017, 42, 60–88. https://doi.org/10.1016/j.media.2017.07.005

  8. [8]

    Intriguing properties of neural networks

Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. International Conference on Learning Representations (ICLR) 2014. https://doi.org/10.48550/arXiv.1312.6199

  9. [9]

    1904.01361

Croce, F.; Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. International Conference on Machine Learning (ICML) 2020. https://doi.org/10.48550/arXiv.2003.01690

  10. [10]

    An introduction to variable and feature selection.Journal of Machine Learning Research 2003,3, 1157–1182

Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. Journal of Machine Learning Research 2003, 3, 1157–1182. http://www.jmlr.org/papers/volume3/guyon03a/guyon03a.pdf

  11. [11]

    Development of partial least squares regression with discriminant analysis for software bug prediction.Heliyon2024,10, e35045

Rajkó, R.; Siket, I.; Hegedűs, P.; Ferenc, R. Development of partial least squares regression with discriminant analysis for software bug prediction. Heliyon 2024, 10, e35045. https://doi.org/10.1016/j.heliyon.2024.e35045

  12. [12]

    Learning the parts of objects by non-negative matrix factorization.Nature1999, 401, 788–791

Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. https://doi.org/10.1038/44565

  13. [13]

    https://doi.org/10.1137/1.9781611976410

Gillis, N. Nonnegative Matrix Factorization; SIAM, 2020. https://doi.org/10.1137/1.9781611976410

  14. [14]

    AutoAttack: Reliable evaluation of adversarial robustness

Croce, F.; Hein, M. AutoAttack: Reliable evaluation of adversarial robustness. https://github.com/fra31/auto-attack, 2020. Last accessed: 10.03.2026

  15. [15]

    Brain Tumor Classification Using Deep Learning and Data Augmentation Techniques.Frontiers in Medicine2025

Almuhaimeed, A.; Alenezi, F.; Alotaibi, A. Brain Tumor Classification Using Deep Learning and Data Augmentation Techniques. Frontiers in Medicine 2025. https://doi.org/10.3389/fmed.2025.1635796

  16. [16]

    Enhanced Multi-Class Brain Tumor Classification Using Hybrid CNN and Transformer Models.Technologies2025

Gómez, J.; Martínez, C.; Fernández, L. Enhanced Multi-Class Brain Tumor Classification Using Hybrid CNN and Transformer Models. Technologies 2025. https://doi.org/10.3390/technologies13090379

  17. [17]

    Robustness Analysis of Brain Tumor Classification Models Under Adversarial Attacks.arXiv2026

Deem, M.; Johnson, S.; Kim, D. Robustness Analysis of Brain Tumor Classification Models Under Adversarial Attacks. arXiv 2026. https://doi.org/10.48550/arXiv.2602.11646

  18. [18]

    Brain tumor detection using Convolutional Neural Network

Hossain, T.; Shishir, F.S.; Ashraf, M.; Al Nasim, M.A.; Shah, F.M. Brain tumor detection using Convolutional Neural Network. 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT-2019), 2020. https://doi.org/10.1109/ICASERT.2019.8934561

  19. [19]

    Non-Negative Matrix Factorization-Convolutional Neural Network (NMF- CNN) for sound event detection

Chan, T.K.; Chin, C.S.; Li, Y. Non-Negative Matrix Factorization-Convolutional Neural Network (NMF-CNN) for sound event detection. Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Challenge, 2020. https://doi.org/10.48550/arXiv.2001.07874

  20. [20]

    Semi-NMF network for image classification.IEEE Access2019, 7, 8899–8903

Huang, H.; Yang, Z.; Liang, N.; Li, Z. Semi-NMF network for image classification. IEEE Access 2019, 7, 8899–8903. https://doi.org/10.23919/ChiCC.2019.8866590

  21. [21]

    Classification-denoising networks, 2024

    Thiry, L.; Guth, F. Classification-denoising networks, 2024. https://doi.org/10.48550/arXiv.2410.03505

  22. [22]

    Based Syst.294 (2024), 111787

Ahamed, M.F.; Hossain, M.M.; Nahiduzzaman, M.; Islam, M.R.; Islam, M.R.; Ahsan, M.; Haider, J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Computerized Medical Imaging and Graphics 2023, 110, 102313. https://doi.org/10.1016/j.compmedimag.2023.102313

  23. [23]

    A systematic review of the hybrid machine learning models for brain tumour segmentation and detection in medical images.Frontiers in Artificial Intelligence2025,Volume 8 - 2025

Netshamutshedzi, N.; Netshikweta, R.; Ndogmo, J.C.; Obagbuwa, I.C. A systematic review of the hybrid machine learning models for brain tumour segmentation and detection in medical images. Frontiers in Artificial Intelligence 2025, 8. https://doi.org/10.3389/frai.2025.1615550

  24. [24]

    Deep Learning Approaches for Brain Tumor Classification in MRI Scans: An Analysis of Model Interpretability.Applied Sciences2026,16

Gomes, E.F.; Barbosa, R.S. Deep Learning Approaches for Brain Tumor Classification in MRI Scans: An Analysis of Model Interpretability. Applied Sciences 2026, 16. https://doi.org/10.3390/app16020831

  25. [25]

    Brain tumor image dataset: Semantic segmentation

Darabi, P.K. Brain tumor image dataset: Semantic segmentation. https://www.kaggle.com/datasets/pkdarabi/brain-tumor-image-dataset-semantic-segmentation, 2023

  26. [26]

    Effect of data leakage in brain MRI classification using 2D convolutional neural networks

Yagis, E.; Atnafu, S.W.; García Seco de Herrera, A.; Marzi, C.; Scheda, R.; Giannelli, M.; Tessa, C.; Citi, L.; Diciotti, S. Effect of data leakage in brain MRI classification using 2D convolutional neural networks. Scientific Reports 2021, 11. https://doi.org/10.1038/s41598-021-01681-w

  27. [27]

Lee, S.; Pang, H.S. Feature extraction based on the Non-Negative Matrix Factorization of Convolutional Neural Networks for monitoring domestic activity with acoustic signals. IEEE Access 2020, 8, 122384–122395. https://doi.org/10.1109/ACCESS.2020.3007199

  28. [28]

    Algorithms for Non-negative Matrix Factorization

Lee, D.D.; Seung, H.S. Algorithms for Non-negative Matrix Factorization. In Advances in Neural Information Processing Systems, Vol. 13. MIT Press, 2000. https://proceedings.neurips.cc/paper_files/paper/2000/file/f9d1152547c0bde01830b7e8bd60024c-Paper.pdf

  29. [29]

    It took me 6 years to find the best metric for classification models

Mazzanti, S. It took me 6 years to find the best metric for classification models. https://medium.com/data-science-collective/it-took-me-6-years-to-find-the-best-metric-for-classification-models-0f5aa21a2b85, 2023. Last accessed: 10.03.2026

  30. [30]

    The Past, Present and Future of the ELKH Cloud.Információs Társadalom2022,22, 128–137

Héder, M.; Rigó, E.; Medgyesi, D.; Lovas, R.; Tenczer, S.; Török, F.; Farkas, A.; Emődi, M.; Kadlecsik, J.; Mező, G.; et al. The Past, Present and Future of the ELKH Cloud. Információs Társadalom 2022, 22, 128–137. https://doi.org/10.22503/inftars.xxii.2022.2.8