Pith · machine review for the scientific record

arXiv: 2605.13315 · v1 · submitted 2026-05-13 · 💻 cs.ET · cs.LG · cs.NE · cs.SY · eess.SY · q-bio.NC

Recognition: 2 theorem links

· Lean Theorem

Embodied Neurocomputation: A Framework for Interfacing Biological Neural Cultures with Scaled Task-Driven Validation

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 18:49 UTC · model grok-4.3

classification 💻 cs.ET · cs.LG · cs.NE · cs.SY · eess.SY · q-bio.NC
keywords biological neural networks · neurocomputation · parameter optimization · hybrid systems · navigation task · task-driven validation · DQN comparison · embodied computing

The pith

With properly tuned interfacing parameters, biological neural networks outperform optimized silicon agents in a navigation task.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces an Embodied Neurocomputation framework to address the challenge of linking biological neural cultures to silicon systems for computation. It demonstrates the framework through a large-scale optimization of encoding and decoding parameters for a biological neural network (BNN) navigating a simulated grid world along an odor-style gradient. Over 1,300 parameter combinations were tested across more than 4,000 hours of agent-environment interaction, identifying 12 configurations that enable consistent learning. These configurations produced higher task performance than silicon-based DQN agents under the same interaction limits. The work provides a foundation for scalable, task-driven use of biological substrates in hybrid computing systems.

Core claim

After evaluating roughly 1,300 parameter sets across extensive real-time experiments, the authors show that their Embodied Neurocomputation framework, applied to optimizing interface parameters for a biological neural network agent in a closed-loop odor-gradient navigation task, identifies configurations that significantly outperform optimized deep Q-network agents under identical interaction budgets.

What carries the argument

The Embodied Neurocomputation framework, a systems-level method for optimizing encoding and decoding interfaces between biological neural cultures and silicon hardware through task-driven parameter searches in simulated environments.

If this is right

  • BNN agents can exhibit reliable learning in navigation tasks when encoding parameters are properly selected.
  • The framework enables systematic comparison and benchmarking of biological versus silicon computing performance.
  • Hybrid bio-silicon systems become feasible for adaptive, goal-oriented tasks like robotic navigation.
  • Task-driven validation scales to identify effective configurations despite large search spaces.
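Figure 2 describes the search as a distributed, two-stage screen that reduces 1,296 initial combinations to a shortlist of 64, from which 12 successful configurations emerge. The staged reduction can be sketched in plain Python; the parameter names, value grids, and surrogate scoring function below are illustrative assumptions, not the paper's actual search space.

```python
from itertools import product

# Hypothetical encoding/decoding grids: 6*6*6*6 = 1,296 combinations,
# matching the Stage 1 count in Figure 2 (names and values assumed).
grid = {
    "encoding_rate_hz": [4, 10, 20, 40, 60, 80],
    "pulse_width_us":   [50, 100, 150, 200, 250, 300],
    "amplitude_mv":     [100, 200, 300, 400, 500, 600],
    "decode_window_ms": [10, 20, 50, 100, 200, 500],
}

def score(params):
    """Stand-in for one closed-loop trial; returns a task score.

    Toy surrogate favoring mid-range values, purely illustrative of
    where a real trial score would be plugged in.
    """
    return -sum((v - max(vals) / 2) ** 2
                for (k, vals), v in zip(grid.items(), params))

combos = list(product(*grid.values()))

# Stage 1: screen every combination, keep a shortlist of 64.
shortlist = sorted(combos, key=score, reverse=True)[:64]

# Stage 2: keep the 12 best, matching the paper's 12 reported setups.
finalists = shortlist[:12]
print(len(combos), len(shortlist), len(finalists))  # 1296 64 12
```

In the paper the scoring step is a real BNN trial dispatched via an Optuna server to parallel clients; the sketch only illustrates the funnel shape of the screen.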

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the simulation accurately models real BNN dynamics, these optimized configurations could be transferred to physical experiments for validation.
  • Similar optimization approaches might reveal advantages for BNNs in other domains requiring energy efficiency or adaptability.
  • Extending the framework to physical robot control could demonstrate real-world utility of living neural cultures for computation.

Load-bearing premise

The simulated grid-world with odor-style gradient captures the essential learning and interaction dynamics of real biological neural cultures.

What would settle it

Testing the top-performing configurations on actual biological neural cultures in a physical navigation setup and finding they do not outperform DQN agents under equivalent conditions.

Figures

Figures reproduced from arXiv: 2605.13315 by Alon Loeffler, Azin Azadi, Bernhard Sendhoff, Bradley Watmuff, Brett J. Kagan, Candice Desouza, Daniel Tanneberg, Daria Kornienko, Finn Doensen, Forough Habibollahi, Johnson Zhou, Justin Leigh Bourke, Kiaran Lawson, Kwaku Dad Abu-Bonsrah, Valentina Baccetti.

Figure 1
Figure 1. Schematic of the proposed Embodied Neurocomputation Framework defined in Section 2. A: Preparation and characterization of the biological neural substrate. (Top) Timeline of neuronal differentiation from induced pluripotent stem cells (iPSCs) via neural stem cells (NSCs) to mature neurons at 90 days in vitro (DIV). (Bottom) Immunocytochemical characterization of neuronal identity. Micrographs show distinct … view at source ↗
Figure 2
Figure 2. Experimental setup for framework evaluation. A: Simulated goal-driven environment featuring an agent, food source, and odor gradient within a 6×6 gridworld. B: Distributed optimization architecture using an Optuna server to dispatch parameters to parallel clients for aggregate trial scoring. C: Two-stage parameter screening pipeline: Stage 1 reduces 1,296 initial combinations to a shortlist (n = 64), and S… view at source ↗
Figure 3
Figure 3. A: SHAP contributions to Stage 1 top 1% performance (see Section 4.1). B: XGBoost feature importance in Stage 1. C: Top 1% parameter distributions across Stages 1–2. … view at source ↗
Figure 4
Figure 4. Performance of BNN, DQN, and baseline agents in (left) Group 3, 150 steps × 1 episode, and (right) Group 4, 30 steps × 5 episodes. Bars represent mean ± 95% CI. Significance determined via Brunner–Munzel tests, where ****: p < 10⁻⁴. (Axis residue removed: normalized mean episode reward across episodes 1–5 for best, top-1%, top-5%, top-10%, and bottom parameter sets.) view at source ↗
Figure 5
Figure 5. A: Group 4 learning progression, showing task performance (mean ± 95% CI) across five episodes for various parameter sets and baselines, with examples of agent traces for episodes 1 and 5 (left and right respectively). B: Proportion of decoded actions associated with reinforcing feedback (mean ± 95% CI), for the top and bottom 1% parameter sets in (A). C: Heat map showing average of relative spike counts a… view at source ↗
Figure 6
Figure 6. Spatial layouts used for encoding and decoding on the MEA. Stage 1 Layout (left): A configuration placing encoding and decoding regions on opposite sides of the chip, which we find to be limiting due to spatial heterogeneity in neuronal coverage. Stage 2 Layout (right): An improved design positioning the encoding region centrally and equidistant from spatially separated decoding regions to mitigate spatial… view at source ↗
Figure 7
Figure 7. Raster plot showing four examples of rate encoding on the encoding channels being delivered at 4 Hz, 40 Hz, 60 Hz, and 80 Hz for 1 second each, with stimulation pulses marked in red. view at source ↗
Figure 8
Figure 8. Schematic of temporal unfolding of environment state encoding and decoding (above) with an example raster plot marking stimulation pulses in red that correspond to two environment interactions: 1) 4 Hz encoding followed by random feedback, and 2) 40 Hz encoding followed by structured feedback. view at source ↗
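Figure 7's rate encoding maps each stimulus frequency to a train of stimulation pulses delivered for one second per block. A minimal sketch, assuming evenly spaced (clock-like) pulses; the paper's exact pulse-timing scheme may differ.

```python
def rate_encode(freq_hz, duration_s=1.0, t0=0.0):
    """Evenly spaced stimulation pulse times (seconds) for one block.

    Regular pulse trains are an assumption for illustration; jittered
    or Poisson timing would be drop-in replacements.
    """
    n = int(round(freq_hz * duration_s))
    period = 1.0 / freq_hz
    return [t0 + i * period for i in range(n)]

# Concatenate the four 1-second blocks shown in Figure 7.
train, t = [], 0.0
for f in (4, 40, 60, 80):
    train.extend(rate_encode(f, 1.0, t))
    t += 1.0

print(len(train))  # 4 + 40 + 60 + 80 = 184 pulses
```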
read the original abstract

Biological neural networks (BNNs) have been established as a powerful and adaptive substrate that offer the potential for incredibly energy and data efficient information processing with distinct learning mechanisms. Yet a core challenge to utilizing BNN for neurocomputation is determining the optimal encoding and decoding mechanisms between the traditional silicon computing interface and the living biology. Here, we propose an Embodied Neurocomputation framework as a systems-level approach to this multi-variable optimization encoding/decoding problem. We operationalize this approach through the first large-scale parameter optimization of encoding configurations for a BNN agent performing closed-loop navigation along an odor-style gradient in a simulated grid-world. Despite the relative simplicity of the task, the biological interactions gave rise to a massive multi-combinatorial search space for optimal parameters. By considering how the components of the system are interconnected and parameterized, we evaluated approximately 1,300 parameter combinations, over 4,000 hours of real-time agent-environment interactions, to identify 12 configurations that consistently demonstrated learning across multiple episodes. These configurations achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget. These findings represent an initial step toward robust and scalable goal-oriented learning using BNNs. Our framework establishes a foundation for applying task-driven neurocomputing and supports the development of field-wide benchmarks. In the long term, this work supports the development of hybrid bio-silicon architectures capable of efficient, adaptive and real-time computation, including the potential for robotic control applications.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes an Embodied Neurocomputation framework for optimizing encoding and decoding interfaces between biological neural networks (BNNs) and silicon systems. It reports a combinatorial search over approximately 1,300 encoding configurations for a BNN agent performing closed-loop navigation along an odor-style gradient in a simulated grid-world, accumulating over 4,000 hours of interactions. From this search the authors identify 12 configurations that exhibit consistent learning across episodes and claim these achieve significantly higher task performance than optimized DQN agents under the same interaction budget. The work positions the framework as an initial step toward scalable, task-driven validation and benchmarks for hybrid bio-silicon neurocomputation.

Significance. If the performance claims are supported by complete methodological reporting and statistical analysis, the scale of the parameter exploration would represent a useful contribution to the development of systematic approaches for BNN interfacing. The identification of multiple successful configurations and the direct comparison to a silicon baseline could help establish benchmarks for goal-oriented learning with biological substrates, supporting longer-term goals of efficient hybrid architectures for adaptive computation.

major comments (2)
  1. [Abstract] Abstract: The central claim that the 12 BNN configurations 'achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget' is load-bearing but unsupported by any description of the DQN optimization procedure. No hyperparameter grid (architecture depth/width, learning rates, replay buffer size, exploration schedule), total training steps or episodes allocated to the baseline, or confirmation of equivalent combinatorial search effort is provided. Without these details the outperformance cannot be distinguished from possible under-optimization of the DQN agents.
  2. [Abstract] Abstract and Results sections: The abstract states results from 1,300 combinations and 4,000 hours but supplies no information on the exact parameter space explored, how the 1,300 combinations were generated or sampled, exclusion criteria for the 12 successful configurations, error bars on performance metrics, or any statistical tests (e.g., t-tests or ANOVA) supporting the 'significantly higher' claim. These omissions prevent verification of the reported findings.
minor comments (1)
  1. [Abstract] Abstract: The phrase 'real-time agent-environment interactions' is used for what are described as simulated experiments; clarify whether this refers to wall-clock time of the simulation or simulated time steps.
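On the statistical-test point: the figures report significance via Brunner–Munzel tests rather than the t-tests or ANOVA the referee suggests, and SciPy exposes this test directly. A minimal sketch on synthetic per-episode rewards (the values are illustrative, not the paper's measurements):

```python
import numpy as np
from scipy.stats import brunnermunzel

rng = np.random.default_rng(0)
# Hypothetical rewards for a BNN configuration vs. a DQN baseline.
bnn = rng.normal(0.30, 0.10, size=30)
dqn = rng.normal(0.10, 0.10, size=30)

# Brunner-Munzel is a nonparametric test for the Behrens-Fisher
# problem: it assumes neither normality nor equal variances.
stat, p = brunnermunzel(bnn, dqn)
print(f"W = {stat:.2f}, p = {p:.2e}")
```

Unlike a paired t-test, this compares the probability that a draw from one group exceeds a draw from the other, which matches the unequal-variance setting of BNN vs. DQN reward distributions.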

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and have revised the manuscript to improve transparency and verifiability of the claims.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The central claim that the 12 BNN configurations 'achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget' is load-bearing but unsupported by any description of the DQN optimization procedure. No hyperparameter grid (architecture depth/width, learning rates, replay buffer size, exploration schedule), total training steps or episodes allocated to the baseline, or confirmation of equivalent combinatorial search effort is provided. Without these details the outperformance cannot be distinguished from possible under-optimization of the DQN agents.

    Authors: We agree that the abstract does not include these details and that this weakens the central claim as presented. In the revised manuscript we have added a concise summary to the abstract describing the DQN baseline: a grid search over network depths (2-4 layers), widths (64-512 units), learning rates (1e-4 to 1e-2), replay buffer sizes (10k-100k transitions), and epsilon-greedy schedules, with total training episodes and steps matched exactly to the BNN interaction budget and equivalent combinatorial effort applied to the baseline. revision: yes

  2. Referee: [Abstract] Abstract and Results sections: The abstract states results from 1,300 combinations and 4,000 hours but supplies no information on the exact parameter space explored, how the 1,300 combinations were generated or sampled, exclusion criteria for the 12 successful configurations, error bars on performance metrics, or any statistical tests (e.g., t-tests or ANOVA) supporting the 'significantly higher' claim. These omissions prevent verification of the reported findings.

    Authors: We acknowledge these omissions in the abstract and results presentation. The revised manuscript now includes: (i) explicit enumeration of the parameter space (spike encoding rates, decoding window lengths, stimulation amplitudes, and interface mappings, yielding the ~1,300 valid combinations via systematic enumeration within biologically feasible bounds); (ii) sampling method (full combinatorial enumeration, not random sampling); (iii) exclusion criteria (no consistent performance gain across at least three episodes); (iv) error bars (standard error of the mean); and (v) statistical support (paired t-tests with p < 0.01 for the 12 configurations versus DQN). These additions have been incorporated into both the abstract and Results sections. revision: yes
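The DQN baseline grid described in the rebuttal can be enumerated in a few lines; the specific values below are illustrative assumptions drawn from the stated ranges, not the authors' actual grid.

```python
from itertools import product

# Hypothetical hyperparameter grid for the DQN baseline
# (values assumed within the rebuttal's stated ranges).
dqn_grid = {
    "depth":       [2, 3, 4],
    "width":       [64, 128, 256, 512],
    "lr":          [1e-4, 1e-3, 1e-2],
    "buffer_size": [10_000, 50_000, 100_000],
    "eps_decay":   ["linear", "exponential"],
}

# Each configuration would be trained under the same interaction
# budget as the BNN agents (budget matching per the rebuttal).
configs = [dict(zip(dqn_grid, vals)) for vals in product(*dqn_grid.values())]
print(len(configs))  # 3 * 4 * 3 * 3 * 2 = 216
```

Enumerating the grid this way makes the "equivalent combinatorial effort" claim auditable: the baseline's search size can be compared directly against the ~1,300 BNN combinations.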

Circularity Check

0 steps flagged

No circularity: empirical results rest on independent experimental evaluation

full rationale

The paper reports an experimental optimization over ~1,300 encoding/decoding parameter combinations for BNN agents in a grid-world navigation task, selecting 12 configurations that showed learning and outperformed DQN baselines under matched interaction budgets. No equations, derivations, or first-principles predictions are presented that reduce to fitted inputs by construction. No self-citations are invoked to establish uniqueness theorems, ansatzes, or load-bearing premises for the central claims. The DQN comparison is an external benchmark whose optimization details may warrant scrutiny for fairness, but this does not create internal circularity. The work is self-contained against external benchmarks and contains no self-definitional, renaming, or fitted-prediction reductions.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that BNNs provide distinct, efficient learning mechanisms and that simulation results can guide real interfacing; no free parameters are explicitly named, but the 1,300-combination search implies many encoding hyperparameters.

free parameters (1)
  • encoding/decoding configuration parameters
    Approximately 1,300 combinations evaluated to identify the 12 successful setups; specific values and selection rules not detailed.
axioms (1)
  • domain assumption Biological neural networks have been established as a powerful and adaptive substrate that offer the potential for incredibly energy and data efficient information processing with distinct learning mechanisms.
    Invoked in the opening sentence of the abstract as background for the framework.

pith-pipeline@v0.9.0 · 5658 in / 1227 out tokens · 33991 ms · 2026-05-14T18:49:56.828636+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

46 extracted references

  1. [1] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP. arXiv, 2019

  2. [2] Asen Mehonic and Anthony J. Kenyon. Brain-inspired computing needs a master plan. Nature, 2022

  3. [3] Ed Bullmore and Olaf Sporns. The economy of brain network organization. Nature Reviews Neuroscience, 2012

  4. [4] Daniel A. Wagenaar, Radhika Madhavan, Jerome Pine, and Steve M. Potter. Controlling Bursting in Cortical Cultures with Closed-Loop Multi-Electrode Stimulation. The Journal of Neuroscience, 2005

  5. [5] Daniele Poli, Vito P. Pastore, and Paolo Massobrio. Functional connectivity in in vitro neuronal assemblies. Frontiers in Neural Circuits, 2015

  6. [6] John P. Cunningham and Byron M. Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 2014

  7. [7] Juan A. Gallego, Matthew G. Perich, Lee E. Miller, and Sara A. Solla. Neural manifolds for the control of movement. Neuron, 2017

  8. [8] Goded Shahaf and Shimon Marom. Learning in Networks of Cortical Neurons. The Journal of Neuroscience, 2001

  9. [9] Sergi Valverde. Breakdown of Modularity in Complex Networks. Frontiers in Physiology, 2017

  10. [10] Satoshi Moriya, Hideaki Yamamoto, Ayumi Hirano-Iwata, Shigeru Kubota, and Shigeo Sato. Quantitative Analysis of Dynamical Complexity in Cultured Neuronal Network Models for Reservoir Computing Applications. International Joint Conference on Neural Networks (IJCNN), 2019

  11. [11] Dowlette-Mary Alam El Din, Leah Moenkemoeller, Alon Loeffler, Forough Habibollahi, Jack Schenkman, Amitav Mitra, Tjitse Van Der Molen, Lixuan Ding, Jason Laird, Maren Schenke, Erik C. Johnson, Brett J. Kagan, Thomas Hartung, and Lena Smirnova. Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory…

  12. [12] Brett J. Kagan, Daniela Duc, Ian Stevens, and Frederic Gilbert. Neurons Embodied in a Virtual World: Evidence for Organoid Ethics? AJOB Neuroscience, 2022

  13. [13] Benedikt Maurer, Vaiva Vasiliauskaitė, Julian Hengsteler, Gino Cathomen, Tobias Ruff, Cedric Schmid, János Vörös, and Stephan J. Ihle. Reinforcement learning for closed-loop optimisation of spatiotemporal stimulation in patterned neuronal networks. bioRxiv, 2026

  14. [14] Md Sayed Tanveer, Dhruvik Patel, Hunter E. Schweiger, Kwaku Dad Abu-Bonsrah, Brad Watmuff, Azin Azadi, Sergey Pryshchep, Karthikeyan Narayanan, Christopher Puleo, Kannathal Natarajan, Mohammed A. Mostajo-Radji, Brett J. Kagan, and Ge Wang. Starting a synthetic biological intelligence lab from scratch. Patterns, 2025

  15. [15] Daniel Rosebrock, Sneha Arora, Naresh Mutukula, Rotem Volkman, Elzbieta Gralinska, Anastasios Balaskas, Amèlia Aragonés Hernández, René Buschow, Björn Brändl, Franz-Josef Müller, Peter F. Arndt, Martin Vingron, and Yechiel Elkabetz. Enhanced cortical neural stem cell identity through short SMAD and WNT inhibition in human cerebral organoids facilitates…

  16. [16] Kwaku Dad Abu-Bonsrah, Candice Desouza, Forough Habibollahi, Hui Wen Chan, Brad Watmuff, Mirella Dottori, and Brett J. Kagan. A novel protocol for the efficient generation of all three major hippocampal neuronal sub-populations from human pluripotent stem cells. bioRxiv, 2026

  17. [17] Brett J. Kagan. Two roads diverged: Pathways toward harnessing intelligence in neural cell cultures. Cell Biomaterials, 2025

  18. [18] Brett J. Kagan. The CL1 as a platform technology to leverage biological neural system functions. Nature Reviews Bioengineering, 2025

  19. [19] Mingchen Zhuge, Changsheng Zhao, Haozhe Liu, Zijian Zhou, Shuming Liu, Wenyi Wang, Ernie Chang, Gael Le Lan, Junjie Fei, Wenxuan Zhang, Yasheng Sun, Zhipeng Cai, Zechun Liu, Yunyang Xiong, Yining Yang, Yuandong Tian, Yangyang Shi, Vikas Chandra, and Jürgen Schmidhuber. Neural Computers. arXiv, 2026

  20. [20] Daniel Tanneberg, Elmar Rueckert, and Jan Peters. Evolutionary training and abstraction yields algorithmic generalization of neural computers. Nature Machine Intelligence, 2020

  21. [21] Hongwei Cai, Zheng Ao, Chunhui Tian, Zhuhao Wu, Hongcheng Liu, Jason Tchieu, Mingxia Gu, Ken Mackie, and Feng Guo. Brain organoid reservoir computing for artificial intelligence. Nature Electronics, 2023

  22. [22] Takuma Sumi, Hideaki Yamamoto, Yuichi Katori, Koki Ito, Satoshi Moriya, Tomohiro Konno, Shigeo Sato, and Ayumi Hirano-Iwata. Biological neurons act as generalization filters in reservoir computing. Proceedings of the National Academy of Sciences, 2023

  23. [23] Ash Robbins, Hunter E. Schweiger, Sebastian Hernandez, Alex Spaeth, Kateryna Voitiuk, David F. Parks, Tjitse Van Der Molen, Jinghui Geng, Isabel Cline, Kenneth S. Kosik, Sofie R. Salama, Tal Sharf, Mohammed A. Mostajo-Radji, David Haussler, and Mircea Teodorescu. Goal-directed learning in cortical organoids. Cell Reports, 2026

  24. [24] Moein Khajehnejad, Forough Habibollahi, Alon Loeffler, Aswin Paul, Adeel Razi, and Brett J. Kagan. Dynamic Network Plasticity and Sample Efficiency in Neural Cultures: A Comparison with Deep Learning. Cyborg and Bionic Systems, 2025

  25. [25] Forough Habibollahi, Amitesh Gaurav, Moein Khajehnejad, and Brett J. Kagan. Biological neurons vs deep reinforcement learning: Sample efficiency in a simulated game-world. NeurIPS Workshop: Memory in Artificial and Real Intelligence (MemARI), 2022

  26. [26] Moein Khajehnejad, Forough Habibollahi, Ahmad Khajehnejad, Chris French, Brett J. Kagan, and Adeel Razi. Graph-based representation learning of neuronal dynamics and behavior. arXiv, 2024

  27. [27] Dowlette-Mary Alam El Din, Leah Moenkemoeller, Alon Loeffler, Forough Habibollahi, Jack Schenkman, Amitav Mitra, Tjitse Van Der Molen, Lixuan Ding, Jason Laird, Maren Schenke, et al. Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory. Communications Biology, 2025

  28. [28] Sihan Hua, Yaoyao Liu, Jinping Luo, Shangchen Li, Longhui Jiang, Pei Wu, Shutong Sun, Li Shang, Chengji Lu, Kui Zhang, Juntao Liu, Mixia Wang, Huaizhang Shi, and Xinxia Cai. Microelectrode arrays cultured with in vitro neural networks for motion control tasks: Encoding and decoding progress and advances. Microsystems & Nanoengineering, 2025

  29. [29] R. Quian Quiroga, Z. Nadasdy, and Y. Ben-Shaul. Unsupervised Spike Detection and Sorting with Wavelets and Superparamagnetic Clustering. Neural Computation, 2004

  30. [30] Marius Pachitariu, Nicholas Steinmetz, Shabnam Kadir, Matteo Carandini, and Kenneth D. Harris. Kilosort: realtime spike-sorting for extracellular electrophysiology with hundreds of channels. bioRxiv, 2016

  31. [31] Jason E. Chung, Jeremy F. Magland, Alex H. Barnett, Vanessa M. Tolosa, Angela C. Tooker, Kye Y. Lee, Kedar G. Shah, Sarah H. Felix, Loren M. Frank, and Leslie F. Greengard. A fully automated approach to spike sorting. Neuron, 2017

  32. [32] Carlos Vargas-Irwin and John P. Donoghue. Automated spike sorting using density grid contour clustering and subtractive waveform decomposition. Journal of Neuroscience Methods, 2007

  33. [33] Luca Ciampi, Ludovico Iannello, Fabrizio Tonelli, Gabriele Lagani, Angelo Di Garbo, Federico Cremisi, and Giuseppe Amato. Neuro-Inspired Visual Pattern Recognition via Biological Reservoir Computing. arXiv, 2026

  34. [34] Brett J. Kagan, Valentina Baccetti, Brian D. Earp, J. Lomax Boyd, Julian Savulescu, and Adeel Razi. A quantifiable information-processing hierarchy provides a necessary condition for detecting agency. arXiv, 2026

  35. [35] Fred D. Jordan, Martin Kutter, Jean-Marc Comby, Flora Brozzi, and Ewelina Kurtys. Open and remotely accessible Neuroplatform for research in wetware computing. Frontiers in Artificial Intelligence, 2024

  36. [36] Brett J. Kagan, Forough Habibollahi, Brad Watmuff, Azin Azadi, Finn Doensen, Alon Loeffler, Seung Hoon Byun, Bram Servais, Candice Desouza, Kwaku Dad Abu-Bonsrah, and Nicole Kerlero de Rosbo. Harnessing Intelligence from Brain Cells In Vitro. The Neuroscientist, 2025

  37. [37] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019

  38. [38] Brett J. Kagan, Andy C. Kitchen, Nhi T. Tran, Forough Habibollahi, Moein Khajehnejad, Bradyn J. Parker, Anjali Bhat, Ben Rollo, Adeel Razi, and Karl J. Friston. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, 2022

  39. [39] Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016

  40. [40] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 2017

  41. [41] Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2020

  42. [42] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 2015

  43. [43] Edgar Brunner and Ullrich Munzel. The nonparametric Behrens-Fisher problem: asymptotic theory and a small-sample approximation. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 2000

  44. [44] E. D. Adrian. The impulses produced by sensory nerve endings: Part I. The Journal of Physiology, 1926

  45. [45] E. D. Adrian and Yngve Zotterman. The impulses produced by sensory nerve-endings: Part II. The response of a Single End-Organ. The Journal of Physiology, 1926

  46. [46] E. D. Adrian and Yngve Zotterman. The impulses produced by sensory nerve endings: Part 3. Impulses set up by Touch and Pressure. The Journal of Physiology, 1926