pith. machine review for the scientific record. sign in

arxiv: 2605.08495 · v1 · submitted 2026-05-08 · 💻 cs.LG · q-bio.NC

Recognition: 2 theorem links

· Lean Theorem

NeuralBench: A Unifying Framework to Benchmark NeuroAI Models

Antoine Ratouchniak, Elisa Cascardi, Hubert Banville, Jarod L\'evy, Jean-R\'emi King, J\'er\'emy Rapin, Katelyn Begany, Marl\`ene Careil, Mingfang (Lucy) Zhang, Saarang Panchavati, Simon Dahan, St\'ephane d'Ascoli, Teon Brooks, Yohann Benchetrit

Pith reviewed 2026-05-12 02:07 UTC · model grok-4.3

classification 💻 cs.LG q-bio.NC
keywords NeuroAIEEGbenchmarking frameworkfoundation modelsbrain recordingsdeep learningstandardized evaluationcognitive decoding
0
0 comments X

The pith

NeuralBench introduces a standardized framework for evaluating AI models on brain recordings, showing foundation models only marginally outperform task-specific ones.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes NeuralBench as a unifying open framework to benchmark models that process brain signals, addressing inconsistent preprocessing and narrow task sets in existing work. It pairs this with an initial EEG release covering 36 tasks and 14 architectures across 94 datasets accessed uniformly. Evaluations using the framework demonstrate that large foundation models deliver only small gains over simpler task-specific approaches. A wide range of tasks, including cognitive decoding and clinical predictions, stay difficult even for the strongest models tested. The structure supports straightforward additions of new tasks, datasets, models, and modalities such as MEG or fMRI.

Core claim

NeuralBench is a unified framework for benchmarking AI models of brain activity, accompanied by NeuralBench-EEG v1.0 which evaluates 14 deep learning architectures on 36 EEG tasks across 94 datasets through a standardized interface. This shows foundation models only marginally outperform task-specific models and that many tasks remain highly challenging.

What carries the argument

NeuralBench framework with its standardized interface for tasks, datasets, models, and preprocessing pipelines, which enables consistent evaluation and extensibility to new modalities.

Load-bearing premise

That the chosen preprocessing pipelines, task definitions, and dataset selection through the standardized interface produce unbiased relative rankings of models without favoring certain architectures or data characteristics.

What would settle it

Re-evaluating the same models on the same datasets but with alternative reasonable preprocessing pipelines that reverses the observed performance ordering between foundation models and task-specific models.

read the original abstract

Deep learning and large public datasets have recently catalyzed the proliferation of AI models for processing brain recordings. However, systematically evaluating these models remains a challenge: not only do the preprocessing pipelines, training and finetuning approaches largely vary across studies, but their downstream evaluation is often limited to small sets of tasks and/or datasets. Here, we present NeuralBench: a unified framework for benchmarking AI models of brain activity. We accompany this framework with NeuralBench-EEG v1.0 -- a large EEG benchmark that includes 36 electroencephalography (EEG) tasks and 14 deep learning architectures, and is evaluated on 94 datasets accessed through a standardized interface. This first EEG-focused release already highlights two main findings. First, current foundation models only marginally outperform task-specific models. Second, a large set of tasks (e.g. cognitive decoding, clinical predictions) remain highly challenging, even for the best models. Critically, NeuralBench is designed for the integration of new tasks, datasets, models, and neuroimaging modalities, as illustrated by preliminary extensions to MEG and fMRI datasets and models. Through this white paper, we invite the community to expand this open-source framework and work together toward a unified benchmarking standard for neuroimaging models.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces NeuralBench, a unifying open-source framework for benchmarking NeuroAI models on brain recordings, with an initial EEG-focused release (NeuralBench-EEG v1.0) that standardizes access to 94 datasets, defines 36 tasks, and evaluates 14 deep learning architectures (including foundation models and task-specific ones). The central empirical claims are that foundation models only marginally outperform task-specific models and that a substantial subset of tasks (e.g., cognitive decoding, clinical predictions) remain highly challenging even for the best-performing models. The framework is presented as extensible to additional modalities such as MEG and fMRI.

Significance. If the standardization proves robust, this work offers a valuable contribution to NeuroAI by addressing the fragmentation of preprocessing pipelines, training protocols, and evaluation tasks that currently hinders systematic model comparison. The scale of the benchmark (94 datasets, 36 tasks) and its open design with community-invitation elements provide a practical foundation for reproducible progress; the explicit release of a common interface is a concrete strength that could accelerate falsifiable comparisons across architectures.

major comments (3)
  1. [§4] §4 (Experiments and Results): The headline claim that foundation models 'only marginally outperform task-specific models' is presented without statistical quantification of the deltas (e.g., no paired tests, bootstrap CIs, or multiple-comparison correction across the 36 tasks), leaving the 'marginal' qualifier unanchored and vulnerable to the possibility that observed differences fall within noise.
  2. [§3.2] §3.2 (Task and Dataset Interface): No ablation or sensitivity analysis is reported on alternative preprocessing pipelines, normalization choices, or task-definition variants; without these, it is impossible to rule out that the reported relative rankings and identification of 'highly challenging' tasks are artifacts of the single chosen standardization rather than intrinsic model properties.
  3. [§2 and §3.1] §2 and §3.1 (Benchmark Construction): The criteria used to select the 36 tasks and 94 datasets are described at a high level but lack explicit documentation of inclusion/exclusion rules, potential dataset biases (e.g., class imbalance, recording quality), or coverage of the broader EEG literature, which directly affects the generalizability of the 'remain highly challenging' conclusion.
minor comments (2)
  1. [Figures/Tables] Figure 2 and Table 1: Axis labels and legend entries are too small for readability in print; consider increasing font size and adding error bars or statistical annotations directly on the performance plots.
  2. [§5] §5 (Discussion): The extensibility claims for MEG/fMRI are illustrated only with preliminary results; a short dedicated subsection with concrete code snippets or interface examples would strengthen the invitation for community contributions.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below and describe the revisions we will make to improve the manuscript.

read point-by-point responses
  1. Referee: [§4] §4 (Experiments and Results): The headline claim that foundation models 'only marginally outperform task-specific models' is presented without statistical quantification of the deltas (e.g., no paired tests, bootstrap CIs, or multiple-comparison correction across the 36 tasks), leaving the 'marginal' qualifier unanchored and vulnerable to the possibility that observed differences fall within noise.

    Authors: We agree that statistical quantification is needed to support the 'marginal' claim. In the revised manuscript, we will add paired non-parametric tests (Wilcoxon signed-rank) comparing foundation models against task-specific models across tasks, report bootstrap confidence intervals on the performance deltas, and apply multiple-comparison correction (FDR) across the 36 tasks. These changes will provide a rigorous anchor for the observed differences. revision: yes

  2. Referee: [§3.2] §3.2 (Task and Dataset Interface): No ablation or sensitivity analysis is reported on alternative preprocessing pipelines, normalization choices, or task-definition variants; without these, it is impossible to rule out that the reported relative rankings and identification of 'highly challenging' tasks are artifacts of the single chosen standardization rather than intrinsic model properties.

    Authors: We acknowledge that sensitivity analyses would strengthen the robustness claims. A full ablation across all 94 datasets and 36 tasks is computationally prohibitive for this initial release. In revision, we will add a targeted sensitivity analysis on a representative subset of tasks for key choices (normalization and filtering), justify the selected pipeline against existing standards, and add an explicit limitations discussion noting that broader sensitivity testing is planned as future community-driven work. revision: partial

  3. Referee: [§2 and §3.1] §2 and §3.1 (Benchmark Construction): The criteria used to select the 36 tasks and 94 datasets are described at a high level but lack explicit documentation of inclusion/exclusion rules, potential dataset biases (e.g., class imbalance, recording quality), or coverage of the broader EEG literature, which directly affects the generalizability of the 'remain highly challenging' conclusion.

    Authors: We will expand Sections 2 and 3.1 with explicit inclusion/exclusion criteria (public availability, minimum subject count, task relevance to NeuroAI). We will also report dataset-level statistics on class balance and recording quality where available, and add a discussion comparing our selection to recent EEG literature surveys to clarify coverage and potential biases. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical benchmark results are computed from external datasets and models

full rationale

The paper introduces a benchmarking framework and reports performance numbers obtained by executing published models on public EEG datasets through a fixed preprocessing and task interface. No equations derive predictions from first principles, no parameters are fitted to the reported deltas, and no self-citation chain is invoked to justify uniqueness or force the central claims. The two headline findings (marginal foundation-model gains and persistently hard tasks) are direct empirical outputs of the benchmark run, not algebraic rearrangements of the input data or prior author results. The standardization choices affect absolute numbers but do not create a definitional loop or fitted-input prediction; they are methodological decisions whose validity can be tested by re-running the open framework on alternative pipelines.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claims rest on the representativeness of the selected tasks and the fairness of the standardized preprocessing and evaluation protocol; no free parameters or invented entities are introduced.

axioms (1)
  • domain assumption A single standardized preprocessing and evaluation interface produces comparable and unbiased model rankings across diverse EEG datasets and tasks.
    Invoked when claiming that the benchmark reveals true relative performance differences.

pith-pipeline@v0.9.0 · 5591 in / 1238 out tokens · 47537 ms · 2026-05-12T02:07:58.297612+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

290 extracted references · 290 canonical work pages · 4 internal anchors

  1. [1]

    Machine learning for neuroimaging with scikit-learn

    Alexandre Abraham, Fabian Pedregosa, Michael Eickenberg, Philippe Gervais, Andreas Mueller, Jean Kossaifi, Alexandre Gramfort, Bertrand Thirion, and Ga \"e l Varoquaux. Machine learning for neuroimaging with scikit-learn. Frontiers in neuroinformatics, 8: 0 14, 2014

  2. [2]

    Gaze-independent BCI-spelling using rapid serial visual presentation ( RSVP )

    Laura Acqualagna and Benjamin Blankertz. Gaze-independent BCI-spelling using rapid serial visual presentation ( RSVP ). Clinical Neurophysiology, 124 0 (5): 0 901--908, May 2013. ISSN 1388-2457. doi:10.1016/j.clinph.2012.12.050. http://dx.doi.org/10.1016/j.clinph.2012.12.050

  3. [3]

    Albrecht, James A

    Matthew A. Albrecht, James A. Waltz, James F. Cavanagh, Michael J. Frank, and James M. Gold. Increased Conflict-Induced slowing, but no differences in Conflict-Induced positive or negative prediction error learning in patients with schizophrenia. Neuropsychologia, 123: 0 131--140, February 2019. ISSN 1873-3514. doi:10.1016/j.neuropsychologia.2018.04.031

  4. [4]

    A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence

    Emily J Allen, Ghislain St-Yves, Yihan Wu, Jesse L Breedlove, Jacob S Prince, Logan T Dowdle, Matthias Nau, Brad Caron, Franco Pestilli, Ian Charest, et al. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nature neuroscience, 25 0 (1): 0 116--126, 2022

  5. [5]

    Physics-informed attention temporal convolutional network for eeg-based motor imagery classification

    Hamdi Altaheri, Ghulam Muhammad, and Mansour Alsulaiman. Physics-informed attention temporal convolutional network for eeg-based motor imagery classification. IEEE Transactions on Industrial Informatics, 2022. doi:10.1109/TII.2022.3197419

  6. [6]

    Influence of P300 latency jitter on event related potential-based brain--computer interface performance

    Pietro Aricò, Fabio Aloise, Francesca Schettini, Serenella Salinari, Donatella Mattia, and Febo Cincotti. Influence of P300 latency jitter on event related potential-based brain--computer interface performance. Journal of Neural Engineering, 11 0 (3): 0 035008, 2014. doi:10.1088/1741-2560/11/3/035008

  7. [7]

    Mother of all BCI benchmarks, 2025

    Bruno Aristimunha, Igor Carrara, Pierre Guetschel, Sara Sedlar, Pedro Rodrigues, Jan Sosulski, Divyesh Narayanan, Erik Bjareholt, Quentin Barthelemy, Robin Tibor Schirrmeister, Reinmar Kobler, Emmanuel Kalunga, Ludovic Darmet, Cattan Gregoire, Ali Abdul Hussain, Ramiro Gatti, Vladislav Goncharenko, Anton Andreev, Jordy Thielen, Thomas Moreau, Yannick Roy,...

  8. [8]

    Braindecode : toolbox for decoding raw electrophysiological brain data with deep learning models, 2026

    Bruno Aristimunha, Pierre Guetschel, Martin Wimpff, Lukas Gemein, Cedric Rommel, Hubert Banville, Maciej Sliwowski, Daniel Wilson, Simon Brandt, Th \'e o Gnassounou, Joseph Paillard, Aman Srivastava, Bruna Junqueira Lopes , Sara Sedlar, Thomas Moreau, Sylvain Chevallier, Alexandre Gramfort, and Robin Tibor Schirrmeister. Braindecode : toolbox for decoding...

  9. [9]

    A unified, scalable framework for neural population decoding

    Mehdi Azabou, Vinam Arora, Venkataramana Ganesh, Ximeng Mao, Santosh Nachimuthu, Michael Mendelson, Blake Richards, Matthew Perich, Guillaume Lajoie, and Eva Dyer. A unified, scalable framework for neural population decoding. Advances in Neural Information Processing Systems, 36: 0 44937--44956, 2023

  10. [10]

    wav2vec 2.0: A framework for self-supervised learning of speech representations

    Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33: 0 12449--12460, 2020

  11. [11]

    Magnetoencephalography for brain electrophysiology and imaging

    Sylvain Baillet. Magnetoencephalography for brain electrophysiology and imaging. Nature neuroscience, 20 0 (3): 0 327--339, 2017

  12. [13]

    Robust control of an effector by an asynchronous EEG brain-machine interface

    Alexandre Barachant. Robust control of an effector by an asynchronous EEG brain-machine interface . Theses, Universit e de Grenoble , March 2012. https://theses.hal.science/tel-01196752

  13. [14]

    A plug&play P300 BCI using information geometry

    Alexandre Barachant and Marco Congedo. A plug&play P300 BCI using information geometry. arXiv preprint arXiv:1409.0107, 2014

  14. [15]

    Multiclass brain--computer interface classification by Riemannian geometry

    Alexandre Barachant, St \'e phane Bonnet, Marco Congedo, and Christian Jutten. Multiclass brain--computer interface classification by Riemannian geometry. IEEE Transactions on Biomedical Engineering, 59 0 (4): 0 920--928, 2011

  15. [16]

    Brain decoding: toward real-time reconstruction of visual perception

    Yohann Benchetrit, Hubert Banville, and Jean-R \'e mi King. Brain decoding: toward real-time reconstruction of visual perception. In ICLR 2024, 2024

  16. [17]

    SpeechBrain-MOABB : An open-source python library for benchmarking deep neural networks applied to EEG signals

    Davide Borra, Francesco Paissan, and Mirco Ravanelli. SpeechBrain-MOABB : An open-source python library for benchmarking deep neural networks applied to EEG signals. Computers in Biology and Medicine, 182: 0 109097, 2024

  17. [18]

    Hierarchical structure guides rapid linguistic predictions during naturalistic listening

    Jonathan R Brennan and John T Hale. Hierarchical structure guides rapid linguistic predictions during naturalistic listening. PloS one, 14 0 (1): 0 e0207741, 2019

  18. [19]

    Power failure: why small sample size undermines the reliability of neuroscience

    Katherine S Button, John PA Ioannidis, Claire Mokrysz, Brian A Nosek, Jonathan Flint, Emma SJ Robinson, and Marcus R Munaf \`o . Power failure: why small sample size undermines the reliability of neuroscience. Nature reviews neuroscience, 14 0 (5): 0 365--376, 2013

  19. [20]

    Burst c-VEP based BCI : Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience

    Kalou Cabrera Castillos, Simon Ladouce, Ludovic Darmet, and Frédéric Dehais. Burst c-VEP based BCI : Optimizing stimulus design for enhanced classification with minimal calibration data and improved user experience. NeuroImage, 284: 0 120446, December 2023. ISSN 1053-8119. doi:10.1016/j.neuroimage.2023.120446. http://dx.doi.org/10.1016/j.neuroimage.2023.120446

  20. [21]

    BrainLM : A foundation model for brain activity recordings

    Josue Ortega Caro, Antonio H de O Fonseca, Christopher Averill, Syed A Rizvi, Matteo Rosati, James L Cross, Prateek Mittal, Emanuele Zappala, Daniel Levine, Rahul M Dhodapkar, et al. BrainLM : A foundation model for brain activity recordings. BioRxiv, pages 2023--09, 2023

  21. [22]

    Grégoire Cattan, Anton Andreev, Pedro L. C. Rodrigues, and Marco Congedo. Dataset of an EEG-based BCI experiment in virtual reality and on a personal computer, 2019. https://zenodo.org/record/2605204

  22. [23]

    Ricardo Chavarriaga and José del R. Millan. Learning from EEG Error-Related potentials in noninvasive Brain-Computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 18 0 (4): 0 381--388, August 2010. ISSN 1558-0210. doi:10.1109/tnsre.2010.2053387. http://dx.doi.org/10.1109/TNSRE.2010.2053387

  23. [24]

    A large finer-grained affective computing EEG dataset

    Jingjing Chen, Xiaobin Wang, Chen Huang, Xin Hu, Xinke Shen, and Dan Zhang. A large finer-grained affective computing EEG dataset. Scientific Data, 10 0 (1), October 2023. ISSN 2052-4463. doi:10.1038/s41597-023-02650-w. http://dx.doi.org/10.1038/s41597-023-02650-w

  24. [25]

    EEG datasets for motor imagery brain computer interface

    Hohyun Cho, Minkyu Ahn, Sangtae Ahn, Moonyoung Kwon, and Chan Jun, Sung. Supporting data for "EEG datasets for motor imagery brain computer interface", 2017. http://gigadb.org/dataset/100295

  25. [27]

    EEG and MEG : relevance to neuroscience

    Fernando Lopes da Silva. EEG and MEG : relevance to neuroscience. Neuron, 80 0 (5): 0 1112--1128, 2013

  26. [28]

    BIDS CHB-MIT scalp EEG database, December 2023

    Jonathan Dan and Ali Shoeb. BIDS CHB-MIT scalp EEG database, December 2023. https://doi.org/10.5281/zenodo.10259996

  27. [29]

    A foundation model of vision, audition, and language for in-silico neuroscience

    St \'e phane d'Ascoli, J \'e r \'e my Rapin, Yohann Benchetrit, Teon Brookes, Katelyn Begany, Jos \'e phine Raugel, Hubert Banville, and Jean-R \'e mi King. A foundation model of vision, audition, and language for in-silico neuroscience. https://ai.meta.com/research/publications/a-foundation-model-of-vision-audition-and-language-for-in-silico-neuroscience/, 2026

  28. [31]

    BERT : Pre-training of deep bidirectional transformers for language understanding

    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT : Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pages 4171--4186, 2019

  29. [32]

    LUNA : Efficient and topology-agnostic foundation model for EEG signal analysis

    Berkay D \"o ner, Thorir Mar Ingolfsson, Luca Benini, and Yawei Li. LUNA : Efficient and topology-agnostic foundation model for EEG signal analysis. In The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), 2025. https://openreview.net/forum?id=uazfjnFL0G

  30. [33]

    Brain- JEPA : Brain dynamics foundation model with gradient positioning and spatiotemporal masking

    Zijian Dong, Ruilin Li, Yilei Wu, Thuan T Nguyen, Joanna S Chong, Fang Ji, Nathanael R Tong, Christopher L Chen, and Juan H Zhou. Brain- JEPA : Brain dynamics foundation model with gradient positioning and spatiotemporal masking. Advances in Neural Information Processing Systems, 37: 0 86048--86073, 2024

  31. [34]

    Dornhege, B

    G. Dornhege, B. Blankertz, G. Curio, and K.-R. Muller. Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms. IEEE Transactions on Biomedical Engineering, 51 0 (6): 0 993--1002, June 2004. ISSN 1558-2531. doi:10.1109/tbme.2004.827088. http://dx.doi.org/10.1109/TBME.2004.827088

  32. [35]

    A large EEG database with users’ profile information for motor imagery brain-computer interface research

    Pauline Dreyer, Aline Roc, Léa Pillette, Sébastien Rimbert, and Fabien Lotte. A large EEG database with users’ profile information for motor imagery brain-computer interface research. Scientific Data, 10 0 (1), September 2023. ISSN 2052-4463. doi:10.1038/s41597-023-02445-z. http://dx.doi.org/10.1038/s41597-023-02445-z

  33. [36]

    Towards decoding individual words from non-invasive brain recordings

    St \'e phane d’Ascoli, Corentin Bel, J \'e r \'e my Rapin, Hubert Banville, Yohann Benchetrit, Christophe Pallier, and Jean-R \'e mi King. Towards decoding individual words from non-invasive brain recordings. Nature Communications, 16 0 (1): 0 10521, 2025

  34. [38]

    REVE : A foundation model for EEG -- adapting to any setup with large-scale pretraining on 25,000 subjects

    Yassine El Ouahidi, Jonathan Lys, Philipp Th \"o lke, Nicolas Farrugia, Bastien Pasdeloup, Vincent Gripon, Karim Jerbi, and Giulia Lioi. REVE : A foundation model for EEG -- adapting to any setup with large-scale pretraining on 25,000 subjects. In The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), 2025. https://openrevi...

  35. [39]

    PyTorch Lightning , March 2019

    William Falcon and The PyTorch Lightning team . PyTorch Lightning , March 2019. https://github.com/Lightning-AI/lightning

  36. [40]

    Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI

    Josef Faller, Carmen Vidaurre, Teodoro Solis-Escalante, Christa Neuper, and Reinhold Scherer. Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI . IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20 0 (3): 0 313--319, May 2012. ISSN 1558-0210. doi:10.1109/tnsre.2012.2189584. http://dx.doi.org/10.1109/tnsre....

  37. [41]

    Lukas A. W. Gemein, Robin T. Schirrmeister, Patryk Chrabaszcz, Daniel Wilson, Joschka Boedecker, Andreas Schulze-Bonhage, Frank Hutter, and Tonio Ball. Machine-learning-based diagnostics of EEG pathology. NeuroImage, 220: 0 117021, 2020

  38. [42]

    You Snooze , You Win : The PhysioNet / Computing in Cardiology Challenge 2018

    Mohammad M Ghassemi, Benjamin E Moody, Li-wei H Lehman, Christopher Song, Qiao Li, Haoqi Sun, Roger G Mark, M Brandon Westover, and Gari D Clifford. You Snooze , You Win : The PhysioNet / Computing in Cardiology Challenge 2018. Computing in cardiology, 45: 0 10.22489/cinc.2018.049, September 2018. ISSN 2325-8861. doi:10.22489/cinc.2018.049

  39. [43]

    Gifford, Kshitij Dwivedi, Gemma Roig, and Radoslaw M

    Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, and Radoslaw M. Cichy. A large and rich EEG dataset for modeling human visual object recognition. Neuroimage, 264: 0 119754, December 2022 a . ISSN 1053-8119. doi:10.1016/j.neuroimage.2022.119754. https://pmc.ncbi.nlm.nih.gov/articles/PMC9771828/

  40. [44]

    A large and rich EEG dataset for modeling human visual object recognition

    Alessandro T Gifford, Kshitij Dwivedi, Gemma Roig, and Radoslaw M Cichy. A large and rich EEG dataset for modeling human visual object recognition. NeuroImage, 264: 0 119754, 2022 b

  41. [47]

    Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, and Matti Hämäläinen

    Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A. Engemann, Daniel Strohmeier, Christian Brodbeck, Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, and Matti Hämäläinen. MEG and EEG data analysis with MNE-Python . Frontiers in Neuroscience, 7, December 2013. ISSN 1662-453X. doi:10.3389/fnins.2013.00267

  42. [48]

    Grosse-Wentrup, C

    M. Grosse-Wentrup, C. Liefhold, K. Gramann, and M. Buss. Beamforming in noninvasive Brain-Computer interfaces. IEEE Transactions on Biomedical Engineering, 56 0 (4): 0 1209--1219, April 2009. ISSN 1558-2531. doi:10.1109/tbme.2008.2009768. http://dx.doi.org/10.1109/TBME.2008.2009768

  43. [49]

    How many people are able to control a P300 -based brain-computer interface ( BCI )? Neuroscience Letters, 462 0 (1): 0 94--98, September 2009

    Christoph Guger, Shahab Daban, Eric Sellers, Clemens Holzner, Gunther Krausz, Roberta Carabalona, Furio Gramatica, and Guenter Edlinger. How many people are able to control a P300 -based brain-computer interface ( BCI )? Neuroscience Letters, 462 0 (1): 0 94--98, September 2009. ISSN 0304-3940. doi:10.1016/j.neulet.2009.06.045. http://dx.doi.org/10.1016/j...

  44. [50]

    Hamid, K

    A. Hamid, K. Gagliano, S. Rahman, N. Tulin, V. Tchiong, I. Obeid, and J. Picone. The Temple University Artifact Corpus : An annotated corpus of EEG artifacts. pages 1--4, 2020

  45. [51]

    The TUH EEG Corpus: A big data resource for automated EEG interpretation

    Amir Harati, Silvia Lopez, I Obeid, J Picone, MP Jacobson, and S Tobochnik. The TUH EEG Corpus: A big data resource for automated EEG interpretation . In 2014 IEEE signal processing in medicine and biology symposium (SPMB), pages 1--5. IEEE, 2014

  46. [52]

    Improved EEG event classification using differential energy

    Amir Harati, Meysam Golmohammadi, Silvia Lopez, Iyad Obeid, and Joseph Picone. Improved EEG event classification using differential energy. pages 1--4, 2015

  47. [53]

    EEG potentials predict upcoming emergency brakings during simulated driving

    Stefan Haufe, Matthias S Treder, Manfred F Gugler, Max Sagebaum, Gabriel Curio, and Benjamin Blankertz. EEG potentials predict upcoming emergency brakings during simulated driving. Journal of Neural Engineering, 8 0 (5): 0 056001, July 2011. ISSN 1741-2552. doi:10.1088/1741-2560/8/5/056001. http://dx.doi.org/10.1088/1741-2560/8/5/056001

  48. [55]

    Hinss, Emilie S

    Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, and Raphaëlle N. Roy. COG-BCI database: A multi-session and multi-task EEG cognitive dataset for passive brain-computer interfaces, July 2022

  49. [56]

    Hinss, Emilie S

    Marcel F. Hinss, Emilie S. Jahanpour, Bertille Somon, Lou Pluchon, Frédéric Dehais, and Raphaëlle N. Roy. Open multi-session and multi-task EEG cognitive dataset for passive brain-computer interface applications. Scientific Data, 10 0 (1), February 2023. ISSN 2052-4463. doi:10.1038/s41597-022-01898-y. http://dx.doi.org/10.1038/s41597-022-01898-y

  50. [57]

    Ridge regression: Biased estimation for nonorthogonal problems

    Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12 0 (1): 0 55--67, 1970

  51. [58]

    An efficient P300 -based brain-computer interface for disabled subjects

    Ulrich Hoffmann, Jean-Marc Vesin, Touradj Ebrahimi, and Karin Diserens. An efficient P300 -based brain-computer interface for disabled subjects. Journal of Neuroscience Methods, 167 0 (1): 0 115--125, January 2008. ISSN 0165-0270. doi:10.1016/j.jneumeth.2007.03.005. http://dx.doi.org/10.1016/j.jneumeth.2007.03.005

  52. [59]

    ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading

    Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading . Scientific Data, 5 0 (1), December 2018. ISSN 2052-4463

  53. [60]

    LoRA : Low-rank adaptation of large language models

    Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Liang Wang, Weizhu Chen, et al. LoRA : Low-rank adaptation of large language models. Iclr, 1 0 (2): 0 3, 2022

  54. [61]

    A novel 9-class auditory ERP paradigm driving a predictive text entry system

    Johannes Höhne. A novel 9-class auditory ERP paradigm driving a predictive text entry system. Frontiers in Neuroscience, 5, 2011. ISSN 1662-453X. doi:10.3389/fnins.2011.00099. http://dx.doi.org/10.3389/fnins.2011.00099

  55. [62]

    Learning From Label Proportions In Brain-Computer Interfaces

    David Hübner. EEG data for: "Learning From Label Proportions In Brain-Computer Interfaces" , 2016. https://zenodo.org/record/192684

  56. [63]

    Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees

    David Hübner, Thibault Verhoeven, Konstantin Schmid, Klaus-Robert Müller, Michael Tangermann, and Pieter-Jan Kindermans. Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees. PLOS ONE, 12 0 (4): 0 e0175856, April 2017. ISSN 1932-6203. doi:10.1371/journal.pone.0175856. http://dx.doi.org/10.1371/journal....

  57. [64]

    Ping-Keng Jao, Ricardo Chavarriaga, and José del R. Millán. EEG-Based online regulation of difficulty in simulated flying. IEEE Transactions on Affective Computing, 14 0 (1): 0 394--405, January 2023. ISSN 2371-9850. doi:10.1109/taffc.2021.3059688. http://dx.doi.org/10.1109/TAFFC.2021.3059688

  58. [65]

    MOABB : trustworthy algorithm benchmarking for bcis

    Vinay Jayaram and Alexandre Barachant. MOABB : trustworthy algorithm benchmarking for bcis. Journal of neural engineering, 15 0 (6): 0 066011, 2018

  59. [66]

    Large brain model for learning generic representations with tremendous EEG data in BCI

    Wei-Bang Jiang, Li-Ming Zhao, and Bao-Liang Lu. Large brain model for learning generic representations with tremendous EEG data in BCI . In The Twelfth International Conference on Learning Representations (ICLR), 2024

  60. [67]

    Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, and Yskandar Hamam

    Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthélemy, Karim Djouani, Eric Monacelli, and Yskandar Hamam. Online SSVEP-based BCI using Riemannian geometry. Neurocomputing, 191: 0 55--68, May 2016. ISSN 0925-2312. doi:10.1016/j.neucom.2016.01.007. http://dx.doi.org/10.1016/j.neucom.2016.01.007

  61. [68]

    Kappenman, Jaclyn L

    Emily S. Kappenman, Jaclyn L. Farrens, Wendy Zhang, Andrew X. Stewart, and Steven J. Luck. ERP CORE : An open resource for human event-related potential research. NeuroImage, 225: 0 117465, January 2021. ISSN 1053-8119. doi:10.1016/j.neuroimage.2020.117465. http://dx.doi.org/10.1016/j.neuroimage.2020.117465

  62. [69]

    Few-shot algorithms for consistent neural decoding ( FALCON ) benchmark

    Brianna M Karpowicz, Joel Ye, Chaofei Fan, Pablo Tostado-Marcos, Fabio Rizzoglio, Clay Washington, Thiago Scodeler, Diogo de Lucena, Samuel R Nason-Tomaszewski, Matthew J Mender, et al. Few-shot algorithms for consistent neural decoding ( FALCON ) benchmark. Advances in Neural Information Processing Systems, 37: 0 76578--76615, 2024

  63. [70]

    EEG-Bench: A Benchmark for EEG Foundation Models in Clinical Applications

    Ard Kastrati, Josua B \"u rki, Jonas Lauer, Cheng Xuan, Raffaele Iaquinto, and Roger Wattenhofer. EEG-Bench: A Benchmark for EEG Foundation Models in Clinical Applications . In NeurIPS 2025 Workshop on Foundation Models for the Brain and Body, 2025

  64. [71]

    Kemp, A.H

    B. Kemp, A.H. Zwinderman, B. Tuk, H.A.C. Kamphuisen, and J.J.L. Oberye. Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG . IEEE Transactions on Biomedical Engineering, 47 0 (9): 0 1185--1194, 2000

  65. [72]

    SwiFT: Swin 4D fMRI Transformer

    Peter Kim, Junbeom Kwon, Sunghwan Joo, Sangyoon Bae, Donggyu Lee, Yoonho Jung, Shinjae Yoo, Jiook Cha, and Taesup Moon. SwiFT: Swin 4D fMRI Transformer . In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 42015--42037. Curran Associates, Inc., 2023

  66. [73]

    Jean-R \'e mi King, Teon L. Brooks, Katie Begany, Lucy Zhang, Josephine Raugel, Jarod L \'e vy, Sophia Houhamdi, Julien Gadonneix, Linnea Evanson, Corentin Bel, St \'e phane d'Ascoli, Marl \`e ne Careil, Yohann Benchetrit, Hubert Banville, and J \'e r \'e my Rapin. Neuralset: A high-performing python package for neuro-ai. 2026

  67. [74]

    Replication data for: Four-class ASME BCI : investigation of the feasibility and comparison of two strategies for multiclassing, 2024

    Simon Kojima. Replication data for: Four-class ASME BCI : investigation of the feasibility and comparison of two strategies for multiclassing, 2024. https://dataverse.harvard.edu/citation?persistentId=doi:10.7910/DVN/1UJDV6

  68. [75]

    Replication data for: An auditory brain-computer interface based on selective attention to multiple tone streams, 2024

    Simon Kojima and Shin'ichiro Kanoh. Replication data for: An auditory brain-computer interface based on selective attention to multiple tone streams, 2024. https://dataverse.harvard.edu/citation?persistentId=doi:10.7910/DVN/MQOVEY

  69. [76]

    Brain invaders calibration-less P300 -based BCI with modulation of flash duration dataset (bi2015a), 2019 a

    Louis Korczowski, Martine Cederhout, Anton Andreev, Grégoire Cattan, Pedro Luis Coelho Rodrigues, Violette Gautheret, and Marco Congedo. Brain invaders calibration-less P300 -based BCI with modulation of flash duration dataset (bi2015a), 2019 a . https://zenodo.org/record/3266930

  70. [77]

    Brain invaders calibration-less P300 -based BCI using dry EEG electrodes dataset (bi2014a), 2019 b

    Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luis Coelho Rodrigues, Violette Gautheret, and Marco Congedo. Brain invaders calibration-less P300 -based BCI using dry EEG electrodes dataset (bi2014a), 2019 b . https://zenodo.org/record/3266223

  71. [78]

    Brain invaders solo versus collaboration: Multi-User P300 -based Brain-Computer interface dataset (bi2014b), 2019 c

    Louis Korczowski, Ekaterina Ostaschenko, Anton Andreev, Grégoire Cattan, Pedro Luis Coelho Rodrigues, Violette Gautheret, and Marco Congedo. Brain invaders solo versus collaboration: Multi-User P300 -based Brain-Computer interface dataset (bi2014b), 2019 c . https://zenodo.org/record/3267301

  72. [79]

    BENDR : Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data

    Demetres Kostas, Stephane Aroca-Ouellette, and Frank Rudzicz. BENDR : Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data. Frontiers in Human Neuroscience, 15: 0 653659, 2021

  73. [80]

    EEG and EMG dataset for the detection of errors introduced by an active orthosis device

    Niklas Kueper, Kartik Chari, Judith Bütefür, Julia Habenicht, Tobias Rossol, Su Kyoung Kim, Marc Tabie, Frank Kirchner, and Elsa Andrea Kirchner. EEG and EMG dataset for the detection of errors introduced by an active orthosis device. Frontiers in Human Neuroscience, 18, January 2024. ISSN 1662-5161. doi:10.3389/fnhum.2024.1304311

  74. [81]

    Lawhern, Amelia J

    Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M. Gordon, Chou P. Hung, and Brent J. Lance. EEGNet : a compact convolutional neural network for EEG -based brain--computer interfaces. Journal of Neural Engineering, 15 0 (5): 0 056013, 2018

  75. [82]

    EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy

    Min-Ho Lee, O-Yeon Kwon, Yong-Jeong Kim, Hong-Kyung Kim, Young-Eun Lee, John Williamson, Siamac Fazli, and Seong-Whan Lee. EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. GigaScience, 8 0 (5), January 2019. ISSN 2047-217X. doi:10.1093/gigascience/giz002. http://dx.doi.org/10.1093/gigascience/giz002

  76. [83]

    Brain-Computer communication: Motivation, aim, and impact of exploring a virtual apartment

    Robert Leeb, Felix Lee, Claudia Keinrath, Reinhold Scherer, Horst Bischof, and Gert Pfurtscheller. Brain-Computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15 0 (4): 0 473--482, December 2007. ISSN 1558-0210. doi:10.1109/tnsre.2007.906956. http://dx.doi....

  77. [84]

    The power of scale for parameter-efficient prompt tuning

    Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 conference on empirical methods in natural language processing, pages 3045--3059, 2021

  78. [85]

    Brain-to-text decoding: A non-invasive approach via typing

    Jarod L \'e vy, Mingfang Zhang, Svetlana Pinet, J \'e r \'e my Rapin, Hubert Banville, St \'e phane d'Ascoli, and Jean-R \'e mi King. Brain-to-text decoding: A non-invasive approach via typing. arXiv preprint arXiv:2502.17480, 2025

  79. [86]

    An EEG motor imagery dataset for brain computer interface in acute stroke patients

    Haijie Liu, Penghu Wei, Haochong Wang, Xiaodong Lv, Wei Duan, Meijie Li, Yan Zhao, Qingmei Wang, Xinyuan Chen, Gaige Shi, Bo Han, and Junwei Hao. An EEG motor imagery dataset for brain computer interface in acute stroke patients. Scientific Data, 11 0 (1), January 2024 a . ISSN 2052-4463. doi:10.1038/s41597-023-02787-8. http://dx.doi.org/10.1038/s41597-02...

  80. [87]

    EEG2video: Towards decoding dynamic visual perception from EEG signals

    Xuan-Hao Liu, Yan-Kai Liu, Yansen Wang, Kan Ren, Hanwen Shi, Zilong Wang, Dongsheng Li, Bao-Liang Lu, and Wei-Long Zheng. EEG2video: Towards decoding dynamic visual perception from EEG signals . Advances in Neural Information Processing Systems, 37: 0 72245--72273, 2024 b

Showing first 80 references.