Recognition: 2 theorem links
· Lean TheoremEmbodied Neurocomputation: A Framework for Interfacing Biological Neural Cultures with Scaled Task-Driven Validation
Pith reviewed 2026-05-14 18:49 UTC · model grok-4.3
The pith
Biological neural networks outperform optimized silicon agents in navigation tasks through optimized interfacing parameters.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors show that their Embodied Neurocomputation framework, applied to optimizing parameters for a biological neural network agent in a closed-loop odor-gradient navigation task, identifies configurations that achieve significantly superior performance compared to optimized deep Q-network agents given identical interaction budgets, after evaluating thousands of parameter sets across extensive real-time simulations.
What carries the argument
The Embodied Neurocomputation framework, a systems-level method for optimizing encoding and decoding interfaces between biological neural cultures and silicon hardware through task-driven parameter searches in simulated environments.
If this is right
- BNN agents can exhibit reliable learning in navigation tasks when encoding parameters are properly selected.
- The framework enables systematic comparison and benchmarking of biological versus silicon computing performance.
- Hybrid bio-silicon systems become feasible for adaptive, goal-oriented tasks like robotic navigation.
- Task-driven validation scales to identify effective configurations despite large search spaces.
Where Pith is reading between the lines
- If the simulation accurately models real BNN dynamics, these optimized configurations could be transferred to physical experiments for validation.
- Similar optimization approaches might reveal advantages for BNNs in other domains requiring energy efficiency or adaptability.
- Extending the framework to physical robot control could demonstrate real-world utility of living neural cultures for computation.
Load-bearing premise
The simulated grid-world with odor-style gradient captures the essential learning and interaction dynamics of real biological neural cultures.
What would settle it
Testing the top-performing configurations on actual biological neural cultures in a physical navigation setup and finding they do not outperform DQN agents under equivalent conditions.
Figures
read the original abstract
Biological neural networks (BNNs) have been established as a powerful and adaptive substrate that offer the potential for incredibly energy and data efficient information processing with distinct learning mechanisms. Yet a core challenge to utilizing BNN for neurocomputation is determining the optimal encoding and decoding mechanisms between the traditional silicon computing interface and the living biology. Here, we propose an Embodied Neurocomputation framework as a systems-level approach to this multi-variable optimization encoding/decoding problem. We operationalize this approach through the first large-scale parameter optimization of encoding configurations for a BNN agent performing closed-loop navigation along an odor-style gradient in a simulated grid-world. Despite the relative simplicity of the task, the biological interactions gave rise to a massive multi-combinatorial search space for optimal parameters. By considering how the components of the system are interconnected and parameterized, we evaluated approximately 1,300 parameter combinations, over 4,000 hours of real-time agent-environment interactions, to identify 12 configurations that consistently demonstrated learning across multiple episodes. These configurations achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget. These findings represent an initial step toward robust and scalable goal-oriented learning using BNNs. Our framework establishes a foundation for applying task-driven neurocomputing and supports the development of field-wide benchmarks. In the long term, this work supports the development of hybrid bio-silicon architectures capable of efficient, adaptive and real-time computation, including the potential for robotic control applications.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes an Embodied Neurocomputation framework for optimizing encoding and decoding interfaces between biological neural networks (BNNs) and silicon systems. It reports a combinatorial search over approximately 1,300 encoding configurations for a BNN agent performing closed-loop navigation along an odor-style gradient in a simulated grid-world, accumulating over 4,000 hours of interactions. From this search the authors identify 12 configurations that exhibit consistent learning across episodes and claim these achieve significantly higher task performance than optimized DQN agents under the same interaction budget. The work positions the framework as an initial step toward scalable, task-driven validation and benchmarks for hybrid bio-silicon neurocomputation.
Significance. If the performance claims are supported by complete methodological reporting and statistical analysis, the scale of the parameter exploration would represent a useful contribution to the development of systematic approaches for BNN interfacing. The identification of multiple successful configurations and the direct comparison to a silicon baseline could help establish benchmarks for goal-oriented learning with biological substrates, supporting longer-term goals of efficient hybrid architectures for adaptive computation.
major comments (2)
- [Abstract] Abstract: The central claim that the 12 BNN configurations 'achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget' is load-bearing but unsupported by any description of the DQN optimization procedure. No hyperparameter grid (architecture depth/width, learning rates, replay buffer size, exploration schedule), total training steps or episodes allocated to the baseline, or confirmation of equivalent combinatorial search effort is provided. Without these details the outperformance cannot be distinguished from possible under-optimization of the DQN agents.
- [Abstract] Abstract and Results sections: The abstract states results from 1,300 combinations and 4,000 hours but supplies no information on the exact parameter space explored, how the 1,300 combinations were generated or sampled, exclusion criteria for the 12 successful configurations, error bars on performance metrics, or any statistical tests (e.g., t-tests or ANOVA) supporting the 'significantly higher' claim. These omissions prevent verification of the reported findings.
minor comments (1)
- [Abstract] Abstract: The phrase 'real-time agent-environment interactions' is used for what are described as simulated experiments; clarify whether this refers to wall-clock time of the simulation or simulated time steps.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address each major comment below and have revised the manuscript to improve transparency and verifiability of the claims.
read point-by-point responses
-
Referee: [Abstract] Abstract: The central claim that the 12 BNN configurations 'achieved significantly higher task performances than optimized silicon-based DQN agents under the same interaction budget' is load-bearing but unsupported by any description of the DQN optimization procedure. No hyperparameter grid (architecture depth/width, learning rates, replay buffer size, exploration schedule), total training steps or episodes allocated to the baseline, or confirmation of equivalent combinatorial search effort is provided. Without these details the outperformance cannot be distinguished from possible under-optimization of the DQN agents.
Authors: We agree that the abstract does not include these details and that this weakens the central claim as presented. In the revised manuscript we have added a concise summary to the abstract describing the DQN baseline: a grid search over network depths (2-4 layers), widths (64-512 units), learning rates (1e-4 to 1e-2), replay buffer sizes (10k-100k transitions), and epsilon-greedy schedules, with total training episodes and steps matched exactly to the BNN interaction budget and equivalent combinatorial effort applied to the baseline. revision: yes
-
Referee: [Abstract] Abstract and Results sections: The abstract states results from 1,300 combinations and 4,000 hours but supplies no information on the exact parameter space explored, how the 1,300 combinations were generated or sampled, exclusion criteria for the 12 successful configurations, error bars on performance metrics, or any statistical tests (e.g., t-tests or ANOVA) supporting the 'significantly higher' claim. These omissions prevent verification of the reported findings.
Authors: We acknowledge these omissions in the abstract and results presentation. The revised manuscript now includes: (i) explicit enumeration of the parameter space (spike encoding rates, decoding window lengths, stimulation amplitudes, and interface mappings, yielding the ~1,300 valid combinations via systematic enumeration within biologically feasible bounds); (ii) sampling method (full combinatorial enumeration, not random sampling); (iii) exclusion criteria (no consistent performance gain across at least three episodes); (iv) error bars (standard error of the mean); and (v) statistical support (paired t-tests with p < 0.01 for the 12 configurations versus DQN). These additions have been incorporated into both the abstract and Results sections. revision: yes
Circularity Check
No circularity: empirical results rest on independent experimental evaluation
full rationale
The paper reports an experimental optimization over ~1,300 encoding/decoding parameter combinations for BNN agents in a grid-world navigation task, selecting 12 configurations that showed learning and outperformed DQN baselines under matched interaction budgets. No equations, derivations, or first-principles predictions are presented that reduce to fitted inputs by construction. No self-citations are invoked to establish uniqueness theorems, ansatzes, or load-bearing premises for the central claims. The DQN comparison is an external benchmark whose optimization details may warrant scrutiny for fairness, but this does not create internal circularity. The work is self-contained against external benchmarks and contains no self-definitional, renaming, or fitted-prediction reductions.
Axiom & Free-Parameter Ledger
free parameters (1)
- encoding/decoding configuration parameters
axioms (1)
- domain assumption Biological neural networks have been established as a powerful and adaptive substrate that offer the potential for incredibly energy and data efficient information processing with distinct learning mechanisms.
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
We model neurocomputation f(·;θt) ... yt = [d(·;θd) ◦ b(·;θb,t) ◦ e(·;θe)](xt) ... evaluated by Score=Metric(yt)
-
IndisputableMonolith/Foundation/RealityFromDistinction.leanreality_from_one_distinction unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
evaluated approximately 1,300 parameter combinations ... 12 configurations that consistently demonstrated learning
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Energy and Policy Considerations for Deep Learning in NLP.arXiv, 2019
Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP.arXiv, 2019
2019
-
[2]
Asen Mehonic and Anthony J. Kenyon. Brain-inspired computing needs a master plan.Nature, 2022
2022
-
[3]
The economy of brain network organization.Nature Reviews Neuroscience, 2012
Ed Bullmore and Olaf Sporns. The economy of brain network organization.Nature Reviews Neuroscience, 2012
2012
-
[4]
Wagenaar, Radhika Madhavan, Jerome Pine, and Steve M
Daniel A. Wagenaar, Radhika Madhavan, Jerome Pine, and Steve M. Potter. Controlling Bursting in Cortical Cultures with Closed-Loop Multi-Electrode Stimulation.The Journal of Neuroscience, 2005
2005
-
[5]
Pastore, and Paolo Massobrio
Daniele Poli, Vito P. Pastore, and Paolo Massobrio. Functional connectivity in in vitro neuronal assemblies.Frontiers in Neural Circuits, 2015
2015
-
[6]
Dimensionality reduction for large-scale neural record- ings.Nature neuroscience, 2014
John P Cunningham and Byron M Yu. Dimensionality reduction for large-scale neural record- ings.Nature neuroscience, 2014
2014
-
[7]
Neural manifolds for the control of movement.Neuron, 2017
Juan A Gallego, Matthew G Perich, Lee E Miller, and Sara A Solla. Neural manifolds for the control of movement.Neuron, 2017
2017
-
[8]
Learning in Networks of Cortical Neurons.The Journal of Neuroscience, 2001
Goded Shahaf and Shimon Marom. Learning in Networks of Cortical Neurons.The Journal of Neuroscience, 2001
2001
-
[9]
Breakdown of Modularity in Complex Networks.Frontiers in Physiology, 2017
Sergi Valverde. Breakdown of Modularity in Complex Networks.Frontiers in Physiology, 2017
2017
-
[10]
Satoshi Moriya, Hideaki Yamamoto, Ayumi Hirano-Iwata, Shigeru Kubota, and Shigeo Sato. Quantitative Analysis of Dynamical Complexity in Cultured Neuronal Network Models for Reservoir Computing Applications.International Joint Conference on Neural Networks (IJCNN), 2019
2019
-
[11]
Johnson, Brett J
Dowlette-Mary Alam El Din, Leah Moenkemoeller, Alon Loeffler, Forough Habibollahi, Jack Schenkman, Amitav Mitra, Tjitse Van Der Molen, Lixuan Ding, Jason Laird, Maren Schenke, Erik C. Johnson, Brett J. Kagan, Thomas Hartung, and Lena Smirnova. Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory...
2025
-
[12]
Kagan, Daniela Duc, Ian Stevens, and Frederic Gilbert
Brett J. Kagan, Daniela Duc, Ian Stevens, and Frederic Gilbert. Neurons Embodied in a Virtual World: Evidence for Organoid Ethics?AJOB Neuroscience, 2022
2022
-
[13]
Benedikt Maurer, Vaiva Vasiliauskait˙e, Julian Hengsteler, Gino Cathomen, Tobias Ruff, Cedric Schmid, János Vörös, and Stephan J. Ihle. Reinforcement learning for closed-loop optimisation of spatiotemporal stimulation in patterned neuronal networks.bioRxiv, 2026
2026
-
[14]
Schweiger, Kwaku Dad Abu-Bonsrah, Brad Watmuff, Azin Azadi, Sergey Pryshchep, Karthikeyan Narayanan, Christopher Puleo, Kannathal Natarajan, Mohammed A
Md Sayed Tanveer, Dhruvik Patel, Hunter E. Schweiger, Kwaku Dad Abu-Bonsrah, Brad Watmuff, Azin Azadi, Sergey Pryshchep, Karthikeyan Narayanan, Christopher Puleo, Kannathal Natarajan, Mohammed A. Mostajo-Radji, Brett J. Kagan, and Ge Wang. Starting a synthetic biological intelligence lab from scratch.Patterns, 2025
2025
-
[15]
Arndt, Martin Vingron, and Yechiel Elkabetz
Daniel Rosebrock, Sneha Arora, Naresh Mutukula, Rotem V olkman, Elzbieta Gralinska, Anasta- sios Balaskas, Amèlia Aragonés Hernández, René Buschow, Björn Brändl, Franz-Josef Müller, Peter F. Arndt, Martin Vingron, and Yechiel Elkabetz. Enhanced cortical neural stem cell identity through short SMAD and WNT inhibition in human cerebral organoids facilitates...
2022
-
[16]
Kwaku Dad Abu-Bonsrah, Candice Desouza, Forough Habibollahi, Hui Wen Chan, Brad Watmuff, Mirella Dottori, and Brett J. Kagan. A novel protocol for the efficient generation of all three major hippocampal neuronal sub-populations from human pluripotent stem cells.bioRxiv, 2026. 11
2026
-
[17]
Brett J. Kagan. Two roads diverged: Pathways toward harnessing intelligence in neural cell cultures.Cell Biomaterials, 2025
2025
-
[18]
Brett J. Kagan. The CL1 as a platform technology to leverage biological neural system functions. Nature Reviews Bioengineering, 2025
2025
-
[19]
Neural Computers.arXiv, 2026
Mingchen Zhuge, Changsheng Zhao, Haozhe Liu, Zijian Zhou, Shuming Liu, Wenyi Wang, Ernie Chang, Gael Le Lan, Junjie Fei, Wenxuan Zhang, Yasheng Sun, Zhipeng Cai, Zechun Liu, Yunyang Xiong, Yining Yang, Yuandong Tian, Yangyang Shi, Vikas Chandra, and Jürgen Schmidhuber. Neural Computers.arXiv, 2026
2026
-
[20]
Evolutionary training and abstraction yields algorithmic generalization of neural computers.Nature Machine Intelligence, 2020
Daniel Tanneberg, Elmar Rueckert, and Jan Peters. Evolutionary training and abstraction yields algorithmic generalization of neural computers.Nature Machine Intelligence, 2020
2020
-
[21]
Brain organoid reservoir computing for artificial intelligence
Hongwei Cai, Zheng Ao, Chunhui Tian, Zhuhao Wu, Hongcheng Liu, Jason Tchieu, Mingxia Gu, Ken Mackie, and Feng Guo. Brain organoid reservoir computing for artificial intelligence. Nature Electronics, 2023
2023
-
[22]
Biological neurons act as generalization filters in reservoir computing.Proceedings of the National Academy of Sciences, 2023
Takuma Sumi, Hideaki Yamamoto, Yuichi Katori, Koki Ito, Satoshi Moriya, Tomohiro Konno, Shigeo Sato, and Ayumi Hirano-Iwata. Biological neurons act as generalization filters in reservoir computing.Proceedings of the National Academy of Sciences, 2023
2023
-
[23]
Schweiger, Sebastian Hernandez, Alex Spaeth, Kateryna V oitiuk, David F
Ash Robbins, Hunter E. Schweiger, Sebastian Hernandez, Alex Spaeth, Kateryna V oitiuk, David F. Parks, Tjitse Van Der Molen, Jinghui Geng, Isabel Cline, Kenneth S. Kosik, Sofie R. Salama, Tal Sharf, Mohammed A. Mostajo-Radji, David Haussler, and Mircea Teodorescu. Goal-directed learning in cortical organoids.Cell Reports, 2026
2026
-
[24]
Moein Khajehnejad, Forough Habibollahi, Alon Loeffler, Aswin Paul, Adeel Razi, and Brett J. Kagan. Dynamic Network Plasticity and Sample Efficiency in Neural Cultures: A Comparison with Deep Learning.Cyborg and Bionic Systems, 2025
2025
-
[25]
Biological neurons vs deep reinforcement learning: Sample efficiency in a simulated game-world.NeurIPS Workshop: Memory in Artificial and Real Intelligence (MemARI) Workshop, 2022
Forough Habibollahi, Amitesh Gaurav, Moein Khajehnejad, and Brett J Kagan. Biological neurons vs deep reinforcement learning: Sample efficiency in a simulated game-world.NeurIPS Workshop: Memory in Artificial and Real Intelligence (MemARI) Workshop, 2022
2022
-
[26]
Graph-based representation learning of neuronal dynamics and behavior.arXiv, 2024
Moein Khajehnejad, Forough Habibollahi, Ahmad Khajehnejad, Chris French, Brett J Kagan, and Adeel Razi. Graph-based representation learning of neuronal dynamics and behavior.arXiv, 2024
2024
-
[27]
Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory.Communications Biology, 2025
Dowlette-Mary Alam El Din, Leah Moenkemoeller, Alon Loeffler, Forough Habibollahi, Jack Schenkman, Amitav Mitra, Tjitse Van Der Molen, Lixuan Ding, Jason Laird, Maren Schenke, et al. Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory.Communications Biology, 2025
2025
-
[28]
Microelectrode arrays cultured with in vitro neural networks for motion control tasks: Encoding and decoding progress and advances.Microsystems & Nanoengineering, 2025
Sihan Hua, Yaoyao Liu, Jinping Luo, Shangchen Li, Longhui Jiang, Pei Wu, Shutong Sun, Li Shang, Chengji Lu, Kui Zhang, Juntao Liu, Mixia Wang, Huaizhang Shi, and Xinxia Cai. Microelectrode arrays cultured with in vitro neural networks for motion control tasks: Encoding and decoding progress and advances.Microsystems & Nanoengineering, 2025
2025
-
[29]
Quian Quiroga, Z
R. Quian Quiroga, Z. Nadasdy, and Y . Ben-Shaul. Unsupervised Spike Detection and Sorting with Wavelets and Superparamagnetic Clustering.Neural Computation, 2004
2004
-
[30]
Kilosort: realtime spike-sorting for extracellular electrophysiology with hundreds of channels.BioRxiv, 2016
Marius Pachitariu, Nicholas Steinmetz, Shabnam Kadir, Matteo Carandini, and Harris Ken- neth D. Kilosort: realtime spike-sorting for extracellular electrophysiology with hundreds of channels.BioRxiv, 2016
2016
-
[31]
A fully automated approach to spike sorting.Neuron, 2017
Jason E Chung, Jeremy F Magland, Alex H Barnett, Vanessa M Tolosa, Angela C Tooker, Kye Y Lee, Kedar G Shah, Sarah H Felix, Loren M Frank, and Leslie F Greengard. A fully automated approach to spike sorting.Neuron, 2017
2017
-
[32]
Automated spike sorting using density grid contour clustering and subtractive waveform decomposition.Journal of neuroscience methods, 2007
Carlos Vargas-Irwin and John P Donoghue. Automated spike sorting using density grid contour clustering and subtractive waveform decomposition.Journal of neuroscience methods, 2007. 12
2007
-
[33]
Neuro-Inspired Visual Pattern Recognition via Biological Reservoir Computing.arXiv, 2026
Luca Ciampi, Ludovico Iannello, Fabrizio Tonelli, Gabriele Lagani, Angelo Di Garbo, Federico Cremisi, and Giuseppe Amato. Neuro-Inspired Visual Pattern Recognition via Biological Reservoir Computing.arXiv, 2026
2026
-
[34]
Kagan, Valentina Baccetti, Brian D
Brett J. Kagan, Valentina Baccetti, Brian D. Earp, J. Lomax Boyd, Julian Savulescu, and Adeel Razi. A quantifiable information-processing hierarchy provides a necessary condition for detecting agency.arXiv, 2026
2026
-
[35]
Jordan, Martin Kutter, Jean-Marc Comby, Flora Brozzi, and Ewelina Kurtys
Fred D. Jordan, Martin Kutter, Jean-Marc Comby, Flora Brozzi, and Ewelina Kurtys. Open and remotely accessible Neuroplatform for research in wetware computing.Frontiers in Artificial Intelligence, 2024
2024
-
[36]
Brett J. Kagan, Forough Habibollahi, Brad Watmuff, Azin Azadi, Finn Doensen, Alon Loeffler, Seung Hoon Byun, Bram Servais, Candice Desouza, Kwaku Dad Abu-Bonsrah, and Nicole Kerlero de Rosbo. Harnessing Intelligence from Brain Cells In Vitro.The Neuroscientist, 2025
2025
-
[37]
Optuna: A next-generation hyperparameter optimization framework.ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework.ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019
2019
-
[38]
Kagan, Andy C
Brett J. Kagan, Andy C. Kitchen, Nhi T. Tran, Forough Habibollahi, Moein Khajehnejad, Bradyn J. Parker, Anjali Bhat, Ben Rollo, Adeel Razi, and Karl J. Friston. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.Neuron, 2022
2022
-
[39]
XGBoost: A Scalable Tree Boosting System.ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016
Tianqi Chen and Carlos Guestrin. XGBoost: A Scalable Tree Boosting System.ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016
2016
-
[40]
A unified approach to interpreting model predictions
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 2017
2017
-
[41]
Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M
Scott M. Lundberg, Gabriel Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. From local explanations to global understanding with explainable AI for trees.Nature Machine Intelligence, 2020
2020
-
[42]
Human-level control through deep reinforcement learning.Nature, 2015
V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning.Nature, 2015
2015
-
[43]
The nonparametric behrens-fisher problem: asymptotic theory and a small-sample approximation.Biometrical Journal: Journal of Mathematical Methods in Biosciences, 2000
Edgar Brunner and Ullrich Munzel. The nonparametric behrens-fisher problem: asymptotic theory and a small-sample approximation.Biometrical Journal: Journal of Mathematical Methods in Biosciences, 2000
2000
-
[44]
E. D. Adrian. The impulses produced by sensory nerve endings: Part I.The Journal of Physiology, 1926
1926
-
[45]
E. D. Adrian and Yngve Zotterman. The impulses produced by sensory nerve-endings: Part II. The response of a Single End-Organ.The Journal of Physiology, 1926
1926
-
[46]
E. D. Adrian and Yngve Zotterman. The impulses produced by sensory nerve endings: Part 3. Impulses set up by Touch and Pressure.The Journal of Physiology, 1926. 13 A Supplementary Materials A.1 Cell Culture and Cryopreservation hiPSC lines (CLV3.1, CLV4.4) were maintained under standard pluripotency conditions and differen- tiated into cortical neurons vi...
1926
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.