pith. machine review for the scientific record.

arxiv: 2604.10293 · v1 · submitted 2026-04-11 · 📡 eess.SP


Impact of Validation Strategy on Machine Learning Performance in EEG-Based Alcoholism Classification


Pith reviewed 2026-05-10 15:38 UTC · model grok-4.3

classification 📡 eess.SP
keywords EEG classification · alcoholism detection · nested cross-validation · validation bias · machine learning · support vector machine · AdaBoost · biomedical signals

The pith

Standard validation overestimates EEG alcoholism classification accuracy by about 5 percent.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that ordinary cross-validation combined with global hyperparameter tuning produces inflated accuracy numbers when machine learning models classify alcoholic versus control EEG signals. Using a balanced collection of 300 trials and a mix of statistical and spectral features, the authors compare standard protocols to nested cross-validation that keeps tuning and testing separate. The clearest drop appears in the support vector machine with radial basis function kernel, which loses roughly 5 percent accuracy under the stricter protocol. Ensemble approaches remain steadier, and AdaBoost reaches 78.3 percent accuracy with balanced sensitivity and specificity. The work concludes that validation choices shape reported performance more than the choice of classifier itself.

Core claim

Conventional validation with global hyperparameter tuning introduces optimistic bias in EEG-based alcoholism classification. In particular, SVM with radial basis function kernel exhibited a performance decrease of approximately 5% under nested cross-validation, indicating overestimation. Ensemble-based methods showed more stable generalization, with AdaBoost achieving the highest performance, reaching 78.3% accuracy, an AUC of 0.868, and balanced sensitivity and specificity.

What carries the argument

Nested cross-validation protocol that separates hyperparameter tuning from final performance estimation, applied to five classifiers on statistical and spectral EEG features.
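The protocol above can be sketched with scikit-learn: an inner loop that only tunes hyperparameters, wrapped in an outer loop that only estimates performance. This is a minimal illustration, not the paper's pipeline; the synthetic data, fold counts, and SVM parameter grid are placeholders.

```python
# Minimal nested cross-validation sketch (illustrative, not the paper's setup).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 300-trial balanced feature matrix.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}

# Inner loop: hyperparameter tuning only.
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(pipe, param_grid, cv=inner)

# Outer loop: accuracy estimated on folds the tuner never saw.
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
nested_scores = cross_val_score(search, X, y, cv=outer)
print(f"nested CV accuracy: {nested_scores.mean():.3f} ± {nested_scores.std():.3f}")
```

Tuning on all the data first and then cross-validating the tuned model is the "global tuning" shortcut the paper flags: the test folds leak into model selection, inflating the estimate.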

If this is right

  • Conventional validation overestimates performance, especially for SVMs using radial basis function kernels.
  • Ensemble methods such as AdaBoost deliver more stable results across validation strategies.
  • Most differences in accuracy between the tested models are not statistically significant.
  • Validation strategy acts as a primary determinant of perceived model performance in EEG analysis.
  • A reproducible framework combining statistical and spectral features enables more trustworthy evaluation of biomedical classifiers.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same optimistic bias likely occurs in other small EEG classification tasks that rely on standard validation.
  • Routine use of nested cross-validation in biomedical signal studies would reduce over-optimistic published accuracies.
  • The interaction between feature type and validation bias could be tested by repeating the comparison on different EEG feature sets.

Load-bearing premise

The 300-trial balanced dataset and chosen statistical plus spectral features are representative enough that the observed validation bias generalizes beyond this specific collection and preprocessing.

What would settle it

Applying the identical five classifiers and both validation protocols to an independent, larger EEG alcoholism dataset and finding no accuracy drop for the radial basis function SVM would falsify the claim of systematic optimistic bias.

Figures

Figures reproduced from arXiv: 2604.10293 by Omer Faruk Ertugrul, Tahir Cetin Akinci, Yuksel Celik.

Figure 1. Proposed validation-aware EEG analysis framework. [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2. Nested cross-validation accuracy comparison with standard deviation. [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
Figure 3. Comparison of sensitivity and specificity across classification models. [PITH_FULL_IMAGE:figures/full_fig_p006_3.png] view at source ↗
Figure 5. Model stability analysis based on cross-fold variance. [PITH_FULL_IMAGE:figures/full_fig_p008_5.png] view at source ↗
Figure 6. ROC curves for all models under nested cross-validation. [PITH_FULL_IMAGE:figures/full_fig_p008_6.png] view at source ↗
Figure 7. 2D projection of the most discriminative features. [PITH_FULL_IMAGE:figures/full_fig_p009_7.png] view at source ↗
read the original abstract

Electroencephalography provides a non-invasive and cost-effective approach for analyzing neural patterns associated with alcohol dependence. However, reported classification performance in EEG-based alcoholism studies varies considerably, often due to differences in validation strategies rather than intrinsic model capability. This study presents a validation-aware machine learning framework to assess the impact of evaluation methodology on classification performance. A balanced multi-channel EEG dataset of 300 trials (150 alcoholic, 150 control) was analyzed using a structured feature representation combining statistical descriptors and spectral band interactions. Five classifiers, including support vector machines (linear and radial basis function kernels), random forest, k-nearest neighbors, and AdaBoost, were evaluated under standard and nested cross-validation protocols. Results show that conventional validation with global hyperparameter tuning introduces optimistic bias. In particular, SVM with radial basis function kernel exhibited a performance decrease of approximately 5% under nested cross-validation, indicating overestimation. Ensemble-based methods showed more stable generalization, with AdaBoost achieving the highest performance, reaching 78.3% accuracy (±4.25), an AUC of 0.868, and balanced sensitivity (78.67%) and specificity (81.33%). These findings highlight that validation strategy is a primary determinant of perceived model performance. Statistical analysis using McNemar's test further shows that most performance differences between models are not statistically significant, emphasizing careful interpretation of classification results. The proposed framework provides a reproducible and robust basis for evaluating machine learning models in biomedical signal analysis.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 3 minor

Summary. The paper claims that conventional validation with global hyperparameter tuning introduces optimistic bias in EEG-based alcoholism classification. On a balanced 300-trial multi-channel dataset using statistical and spectral features, five classifiers (SVM-linear, SVM-RBF, random forest, kNN, AdaBoost) were compared under standard vs. nested cross-validation. SVM-RBF showed an approximately 5% performance drop under nested CV, while AdaBoost was most stable at 78.3% accuracy (±4.25), AUC 0.868, with balanced sensitivity/specificity; McNemar's test indicated most inter-model differences are not statistically significant.

Significance. If the empirical comparison holds, the work demonstrates that validation strategy is a primary driver of reported performance variability in biomedical signal classification, providing concrete evidence (accuracy drops, AUC, statistical tests) that nested CV mitigates overestimation. This is practically useful for the field and aligns with calls for reproducible ML in EEG analysis; the stability of ensembles is a clear takeaway.

major comments (1)
  1. Results section: the central optimistic-bias claim rests on the ~5% SVM-RBF drop and stability of ensembles, but the manuscript must report the complete accuracy/AUC/sensitivity/specificity table for all five classifiers under both standard and nested CV protocols (including the exact inner/outer fold counts and hyperparameter search grids) to permit verification that the bias is not an artifact of a single split or incomplete tuning.
minor comments (3)
  1. Methods: the 'structured feature representation combining statistical descriptors and spectral band interactions' is described at a high level; explicit formulas or pseudocode for the spectral features and the precise list of statistical descriptors should be added for reproducibility.
  2. The ±4.25 reported with AdaBoost accuracy should be clarified as standard deviation across folds or subjects, and the number of folds in both CV schemes should be stated explicitly.
  3. Discussion: while McNemar's test is used appropriately, a brief justification for its application to paired classifier outputs on the same data splits would strengthen the statistical interpretation.
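The referee's third point concerns McNemar's test on paired predictions. A hedged sketch of the exact form of the test, applied to two classifiers evaluated on the same test samples, is below; the counts b and c are the discordant pairs (only one model correct), and the toy predictions are illustrative, not the paper's outputs.

```python
# Exact McNemar's test on paired classifier predictions (toy data, not the paper's).
import numpy as np
from scipy.stats import binom

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])
pred_a = np.array([0, 1, 1, 0, 0, 0, 1, 1, 1, 0])
pred_b = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 0])

correct_a = pred_a == y_true
correct_b = pred_b == y_true
b = int(np.sum(correct_a & ~correct_b))  # only model A correct
c = int(np.sum(~correct_a & correct_b))  # only model B correct

# Under H0 (equal error rates), the discordant counts follow Binomial(b+c, 0.5);
# the exact two-sided p-value doubles the smaller tail, capped at 1.
n = b + c
p_value = 2 * min(binom.cdf(min(b, c), n, 0.5), 0.5) if n else 1.0
print(f"b={b}, c={c}, p={p_value:.3f}")
```

Because both prediction vectors come from the same test samples, the pairs are matched, which is exactly the condition McNemar's test requires and ordinary two-proportion tests violate.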

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for this constructive comment aimed at improving the verifiability of our findings. We will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: Results section: the central optimistic-bias claim rests on the ~5% SVM-RBF drop and stability of ensembles, but the manuscript must report the complete accuracy/AUC/sensitivity/specificity table for all five classifiers under both standard and nested CV protocols (including the exact inner/outer fold counts and hyperparameter search grids) to permit verification that the bias is not an artifact of a single split or incomplete tuning.

    Authors: We concur that a complete table is necessary for full transparency. The manuscript currently highlights key results such as the performance drop for SVM-RBF and the stability of AdaBoost but does not present all metrics for every classifier under both protocols. In the revised version, we will include a comprehensive table with accuracy, AUC, sensitivity, and specificity for SVM-linear, SVM-RBF, random forest, kNN, and AdaBoost under standard and nested cross-validation. We will also explicitly state the inner and outer fold counts used in the nested CV protocol and provide the hyperparameter search grids for each classifier to enable independent verification. revision: yes

Circularity Check

0 steps flagged

No significant circularity identified

full rationale

The paper is a purely empirical study that compares standard versus nested cross-validation on a fixed 300-trial EEG dataset using off-the-shelf classifiers and hand-crafted statistical/spectral features. All reported accuracies, AUC values, and McNemar test results are computed directly from data splits; no equations, derivations, or fitted parameters are presented that could reduce to their own inputs by construction. No self-citations are invoked as load-bearing uniqueness theorems or ansatzes, and the central claim of optimistic bias follows immediately from the experimental protocol rather than from any circular redefinition or renaming of known results.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Central claim rests on standard machine learning assumptions about data independence and the sufficiency of the given feature set; no new entities or ad-hoc parameters are introduced beyond typical hyperparameter tuning.

free parameters (1)
  • classifier hyperparameters
    Tuned globally in standard CV and nested in the stricter protocol; specific values not stated in abstract.
axioms (1)
  • domain assumption: EEG trials are independent and identically distributed across subjects
    Required for cross-validation validity in subject-level classification tasks.
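When that axiom fails, for example when several trials come from the same subject, plain k-fold CV can place a subject's trials in both train and test folds and leak subject identity. A minimal sketch of the subject-level alternative, using scikit-learn's GroupKFold with synthetic subject IDs (the grouping is hypothetical; the paper does not state its subject structure):

```python
# Subject-level splitting sketch: GroupKFold keeps every subject's trials
# in a single fold, so no subject spans train and test. IDs are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # stand-in feature matrix
y = np.repeat([0, 1], 150)                # balanced labels
subjects = np.repeat(np.arange(30), 10)   # 30 subjects, 10 trials each

gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    # Verify no subject appears on both sides of the split.
    assert not set(subjects[train_idx]) & set(subjects[test_idx])
print("subject-level folds: no subject overlap between train and test")
```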

pith-pipeline@v0.9.0 · 5574 in / 1123 out tokens · 52501 ms · 2026-05-10T15:38:09.100228+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

48 extracted references · 31 canonical work pages · 1 internal anchor

  1. [1] M. Rangaswamy et al., "Beta power in the EEG of alcoholics," Biological Psychiatry, vol. 52, no. 8, pp. 831–842, 2004, doi: 10.1016/j.biopsych.2004.02.028.

  2. [2] B. Porjesz et al., "Linkage disequilibrium between the beta frequency of the human EEG and a GABAA receptor gene locus," PNAS, vol. 99, no. 6, pp. 3729–3733, 2002, doi: 10.1073/pnas.052716399.

  3. [3] P. Coutin-Churchman et al., "Quantification of interhemispheric differences in EEG for alcoholism diagnosis," Clinical EEG and Neuroscience, vol. 37, no. 2, pp. 101–108, 2006, doi: 10.1177/155005940603700206.

  4. [4] U. R. Acharya, S. V. Sree, S. Chattopadhyay, and J. S. Suri, "Automated diagnosis of normal and alcoholic EEG signals," International Journal of Neural Systems, vol. 24, no. 3, 2014, doi: 10.1142/S0129065714500014.

  5. [5] Y. Zhang et al., "Deep learning-based EEG classification for neurological disorders: A systematic review," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 1–15, 2023.

  6. [6] W. Mumtaz, L. Xia, S. S. A. Ali, and M. A. M. Yasin, "EEG-based computer-aided technique to diagnose major depressive disorder," Biomedical Signal Processing and Control, vol. 31, pp. 108–115, 2017, doi: 10.1016/j.bspc.2016.07.006.

  7. [7] O. Akgun, A. Akan, H. Demir, and T. C. Akinci, "Analysis of gait dynamics of ALS disease and classification using artificial neural networks," Tehnički vjesnik, vol. 25, no. 5, 2018, doi: 10.17559/TV-20160914144554.

  8. [8] T. C. Akinci, "Time-frequency analysis of arc welding current," Mechanics, vol. 85, no. 5, 2010.

  9. [9] S. Taşkın, S. Şeker, M. Karahan, and T. C. Akinci, "Spectral analysis for current and temperature measurements in power cables," Electric Power Components and Systems, vol. 37, no. 4, 2009, doi: 10.1080/15325000902740852.

  10. [10] T. C. Akinci, "The defect detection in ceramic materials based on time-frequency analysis by using the method of impulse noise," Archives of Acoustics, vol. 36, no. 1, 2011, doi: 10.2478/v10168-011-0007-y.

  11. [11] M. Li et al., "Cross-subject EEG classification using transformer networks with attention optimization," NeuroImage, vol. 256, 2022, doi: 10.1016/j.neuroimage.2022.119308.

  12. [12] S. Roy et al., "Robust EEG-based mental state classification using hybrid deep learning and attention mechanisms," NeuroImage, vol. 268, 2023, doi: 10.1016/j.neuroimage.2023.119862.

  13. [13] L. Wang et al., "Cross-subject EEG classification using domain adaptation and interpretable deep learning," Pattern Recognition, vol. 144, 2024, doi: 10.1016/j.patcog.2023.109993.

  14. [14] A. Gupta et al., "Interpretable deep learning for EEG-based cognitive state assessment," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 1120–1132, 2023, doi: 10.1109/TNSRE.2023.3267891.

  15. [15] L. Wang et al., "Multi-domain adaptation for cross-session EEG classification," IEEE Trans. Biomed. Eng., vol. 70, no. 5, pp. 1502–1513, 2023, doi: 10.1109/TBME.2022.3229874.

  16. [16] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997, doi: 10.1006/jcss.1997.1504.

  17. [17] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001, doi: 10.1023/A:1010933404324.

  18. [18] S. Varma and R. Simon, "Bias in error estimation when using cross-validation for model selection," BMC Bioinformatics, vol. 7, no. 91, 2006, doi: 10.1186/1471-2105-7-91.

  19. [19] G. C. Cawley and N. L. C. Talbot, "On over-fitting in model selection and subsequent selection bias in performance evaluation," Journal of Machine Learning Research, vol. 11, pp. 2079–2107, 2010.

  20. [20] S. Roy et al., "Self-supervised learning for EEG representation: Improving robustness in low-data regimes," Pattern Recognition, vol. 139, 2023, doi: 10.1016/j.patcog.2023.109456.

  21. [21] H. Zhang et al., "EEG-based emotion recognition using graph convolutional networks and domain adaptation," IEEE Trans. Affective Comput., vol. 14, no. 2, pp. 845–857, 2023, doi: 10.1109/TAFFC.2021.3101234.

  22. [22] H. Begleiter, "EEG Database Data Set," UCI Machine Learning Repository, 1999. [Online]. Available: https://archive.ics.uci.edu/dataset/121/eeg+database, doi: 10.24432/C5H88K.

  23. [23] P. Rodrigues, A. Silva, and J. Madeira, "EEG-based alcoholism detection using machine learning techniques," Biomedical Signal Processing and Control, vol. 52, pp. 1–10, 2019.

  24. [24] M. Mukhtar and S. Khan, "Convolutional neural network-based EEG classification for alcoholism detection," IEEE Access, vol. 9, pp. 115920–115930, 2021.

  25. [25] S. B. Sangle, P. H. Kachare, D. V. Puri, I. Al-Shoubarji, A. Jabbari, and R. Kirner, "Explaining electroencephalogram channel and subband sensitivity for alcoholism detection," Computers in Biology and Medicine, vol. 188, p. 109826, 2025.

  26. [26] N. N. Ahmed and J. Medikonda, "Stacked generalization reveals optimal model complexity for regional EEG-based alcoholism classification," Biomedical Signal Processing and Control, vol. 120, p. 110121, 2026.

  27. [27] S. Patil, A. Jayappa, P. Joshi, A. Ingle, and O. Ketkar, "EEG signal data analysis for association with alcoholism," in Adaptive Intelligence (InCITe 2024), Lecture Notes in Electrical Engineering, vol. 1280, Springer, Singapore, 2025.

  28. [28] L. I. L., T. Tekale, B. H. Nandan, S. V. George, and S. Patil, "A Dataset Agnostic Architecture for EEG Classification: An Adaptive Windowed STFT Based Attention Network (AWS-AN)," in Proc. 14th Int. Conf. Brain-Computer Interface (BCI), Gangwon Province, Korea, 2026, pp. 1–7.

  29. [29] S. N. Vaniya, A. Habib, M. Angelova, and C. Karmakar, "Simplifying Depression Diagnosis: Single-Channel EEG and Deep Learning Approaches," IEEE Journal of Biomedical and Health Informatics, 2025.

  30. [30] H. Chhabra, R. Vempati, U. Chauhan, et al., "Automated human emotion recognition from EEG signals using chaotic local binary pattern and ensemble learning," International Journal of Machine Learning and Cybernetics, vol. 17, p. 12, 2026.

  31. [31] G. Saini, R. Kumar, L. Malviya, et al., "Stress detection using EEG signals: comparative analysis of machine learning models and feature extraction," Life Cycle Reliability and Safety Engineering, vol. 15, pp. 113–130, 2026.

  32. [32] J. Liu, K. Narasimhan, V. Elamaran, N. Arunkumar, M. Solarte, and G. Ramirez-Gonzalez, "Clinical Decision Support System for Alcoholism Detection Using the Analysis of EEG Signals," IEEE Access, vol. 6, pp. 61457–61461, 2018.

  33. [33] O.-I. Știrbu, F.-C. Argatu, F.-C. Adochiei, B.-A. Enache, and G.-C. Șerițan, "Depression detection from three-channel resting-state EEG using a hybrid Conv1D and spectral–statistical fusion model," Sensors, vol. 26, no. 5, p. 1417, 2026, doi: 10.3390/s26051417.

  34. [34] M. Bhuvaneshwari and E. Grace Mary Kanaga, "Empirical Fourier decomposition-based alcoholism detection using biomedical signals: A neuro-scientific approach," Sādhanā, vol. 51, p. 77, 2026.

  35. [35] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995, doi: 10.1007/BF00994018.

  36. [36] R. Muhammad, E. E. Nettey-Oppong, M. Usman, S. A. K. Abro, T. A. Soomro, and A. Ali, "Neural efficiency and attentional instability in gaming disorder: A task-based occipital EEG and machine learning study," Bioengineering, vol. 13, no. 2, p. 152, 2026, doi: 10.3390/bioengineering13020152.

  37. [37] G. Singh and D. Singh, "A statistical and machine-learning framework for characterizing transient and aperiodic EEG signal dynamics," SSRN, pp. 1–20, 2026, doi: 10.2139/ssrn.6312922.

  38. [38] Q. McNemar, "Note on the sampling error of the difference between correlated proportions," Psychometrika, vol. 12, no. 2, pp. 153–157, 1947.

  39. [39] J. Cohen, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988.

  40. [40] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861–874, 2006, doi: 10.1016/j.patrec.2005.10.010.

  41. [41] C. E. Bonferroni, "Teoria statistica delle classi e calcolo delle probabilità," Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze, vol. 8, pp. 3–62, 1936.

  42. [42] D.-J. Sung, J.-H. Jeong, K.-T. Kim, J.-Y. Lee, S. J. Lee, and H. Kim, "Few-shot channel selection with wavelet scattering and squeeze-and-excitation for EEG motor imagery classification," Biomedical Signal Processing and Control, vol. 120, p. 110046, 2026, doi: 10.1016/j.bspc.2026.110046.

  43. [43] Z. Nouri, A. Charmin, H. Kalbkhani, et al., "Multivariate synchrosqueezing transform and time-frequency attention for mental workload classification from EEG signals," Scientific Reports, vol. 16, p. 4948, 2026, doi: 10.1038/s41598-025-34783-w.

  44. [44] H. Ali and M. Islam, "Falsifying complexity: A non-linear interaction feature outperforms canonical and PAC features for EEG-based ASD classification," in Proc. 5th Int. Conf. Electrical, Computer & Telecommunication Engineering (ICECTE), pp. 1–6, 2026, doi: 10.1109/ICECTE69292.2026.11429353.

  45. [45] G. R. Ianni, Y. Vázquez, A. G. Rouse, M. H. Schieber, Y. Prut, and W. A. Freiwald, "Facial gestures are enacted through a cortical hierarchy of dynamic and stable codes," Science, vol. 391, no. 6781, 2026, doi: 10.1126/science.aea0890.

  46. [46] L. Hecker, "invertmeeg: A benchmark and unified Python library for EEG inverse solvers," bioRxiv, 2026, doi: 10.64898/2026.03.06.710103.

  47. [47] V. Manasevich, D. Kostanian, A. Rogachev, and O. Sysoeva, "EEG correlates of auditory rise time processing: A systematic review," bioRxiv, 2026, doi: 10.64898/2026.03.06.710012.

  48. [48] P. Singh, A. Gupta, S. Jalan, M. Kumar, and P. Singh, "FEEL: Quantifying heterogeneity in physiological signals for generalizable emotion recognition," arXiv preprint, arXiv:2604.05926, 2026, doi: 10.48550/arXiv.2604.05926.