pith. machine review for the scientific record.

arxiv: 2604.23903 · v1 · submitted 2026-04-26 · 🧬 q-bio.NC · cs.LG


Integrative neurocybernetic modeling in the era of large-scale neuroscience

Alfonso Renart, Auke Ijspeert, Ayesha Vermani, Daniel McNamee, Gonzalo G. de Polavieja, Il Memming Park, Joseph J. Paton, Juan Álvaro Gallego, Kathleen Esfahany, Matthew Dowling, Michael Orger, Shreya Saxena, Srinivas C. Turaga, Zachary Mainen

Pith reviewed 2026-05-08 04:43 UTC · model grok-4.3

classification 🧬 q-bio.NC cs.LG
keywords integrative modeling · neurocybernetics · closed-loop dynamics · dynamical systems · large-scale neuroscience · brain-body-environment · control objectives

The pith

Understanding behavior requires dynamical models that close the loop between brain, body, and environment.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Large-scale neuroscience produces vast datasets, yet modeling remains fragmented across separate experiments and brain regions. The paper claims that progress depends on integrative neurocybernetic models: understandable dynamical systems that treat the brain as a controller pursuing latent objectives while capturing closed-loop interactions with the body and world. These models must represent structured variation across scales and combine data from recordings, behavior, perturbations, and anatomy. By pooling complementary constraints through nonlinear state-space models, meta-dynamical extensions, scalable inference, and connectomics-informed architectures, such models can deliver statistical amplification, few-shot generalization, and insight into shared dynamical principles.
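The modeling object this summary keeps invoking, a nonlinear state-space model, can be made concrete with a small synthetic sketch. The dynamics function, dimensions, and noise levels below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a nonlinear state-space model: low-dimensional latent
# dynamics observed through a noisy linear readout. All choices here are
# illustrative stand-ins.

rng = np.random.default_rng(0)
d_latent, d_obs, T = 2, 10, 300

C = rng.standard_normal((d_obs, d_latent))   # observation matrix

def f(z):
    # latent dynamics: a slow, state-dependent rotation (invented)
    theta = 0.1 + 0.05 * np.tanh(z[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return 0.98 * R @ z

z = np.array([1.0, 0.0])
Z, Y = [], []
for t in range(T):
    z = f(z) + 0.02 * rng.standard_normal(d_latent)   # process noise
    Z.append(z.copy())
    Y.append(C @ z + 0.1 * rng.standard_normal(d_obs))  # observation noise

Z, Y = np.array(Z), np.array(Y)
# Inference runs the other way: recover f and the latent trajectory Z
# from the partial, noisy observations Y alone.
```

Everything downstream in the paper's agenda (meta-dynamical extensions, pooled constraints) builds on inverting generative loops of this shape.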

Core claim

The central claim is that understanding behavior requires integrative neurocybernetic models: understandable dynamical models that capture the closed-loop coupling of brain, body and environment, treat the brain as a controller pursuing latent objectives, represent structured variation across scales, and scale to heterogeneous datasets. Such models shift the goal from predicting neural recordings in isolation to inferring the organizing principles that govern neural and behavioral dynamics.

What carries the argument

Integrative neurocybernetic models: understandable dynamical models that close the loop across brain-body-environment, treat the brain as an objective-seeking controller, represent multi-scale structured variation, and pool constraints from heterogeneous data sources.
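The controller framing can be caricatured in a few lines of simulation. The linear dynamics, gain matrix, and set-point below are invented stand-ins, not anything the paper specifies.

```python
import numpy as np

# Minimal closed-loop sketch: the "brain" is a proportional controller
# pursuing a latent set-point; body and environment are a linear system
# it acts on and senses. All matrices are illustrative.

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # body/environment dynamics
B = np.array([[0.0], [1.0]])             # how motor commands enter
target = np.array([1.0, 0.0])            # latent objective: a set-point
K = np.array([[0.5, 0.2]])               # controller gain (the "brain")

x = np.zeros(2)
trajectory = []
for t in range(50):
    u = K @ (target - x)                 # sensing closes the loop
    x = A @ x + (B @ u).ravel() + 0.01 * rng.standard_normal(2)
    trajectory.append(x.copy())
trajectory = np.array(trajectory)

# The loop reduces (but, being purely proportional, does not eliminate)
# the error to the set-point. Inferring `target` and `K` from observed
# trajectories is the inverse problem the paper emphasizes.
```

The point of the sketch is the direction of inference: given only trajectories like this, an integrative model must recover both the dynamics and the objective being pursued.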

If this is right

  • Pooling constraints from recordings, behavior, perturbations, and anatomy yields statistical amplification.
  • Models gain few-shot generalization across animals, brain areas, and behavioral contexts.
  • Shared dynamical structure and individual variation become identifiable from heterogeneous data.
  • The focus shifts from isolated prediction to inferring organizing principles of behavior.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This modeling style could make it routine to test whether the same control objectives appear across species or tasks.
  • It might allow direct comparison of model-inferred objectives against behavioral data collected under novel perturbations.
  • The framework suggests that connectomics data could serve as a prior that regularizes dynamical inference even when recordings are sparse.
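The last bullet can be sketched concretely. Below, a hypothetical binary connectome mask enters a ridge fit of linear dynamics as a per-entry penalty; all matrices and recordings are synthetic, and real connectomics-informed architectures are far richer than this.

```python
import numpy as np

# Hypothetical sketch: a binary connectome mask as a structural prior.
# Dynamics-matrix entries with no anatomical edge get a heavy ridge
# penalty, regularizing inference from a short synthetic recording.

rng = np.random.default_rng(1)
n, T = 4, 200
mask = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1],
                 [1, 0, 0, 1]], dtype=float)   # assumed anatomical edges
A_true = 0.2 * mask                            # dynamics respect the mask

X = np.zeros((T, n))
X[0] = rng.standard_normal(n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.standard_normal(n)

# Row-wise weighted ridge regression: strong shrinkage on non-edges.
lam_edge, lam_gap = 0.1, 100.0
Phi, Y = X[:-1], X[1:]
A_hat = np.zeros((n, n))
for i in range(n):
    penalties = np.where(mask[i] > 0, lam_edge, lam_gap)
    A_hat[i] = np.linalg.solve(Phi.T @ Phi + np.diag(penalties),
                               Phi.T @ Y[:, i])

# Entries the connectome rules out are shrunk toward zero even though
# the recording alone is too short to pin them down.
```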

Load-bearing premise

Combining nonlinear state-space models, meta-dynamical extensions, scalable inference, knowledge distillation, mixed open- and closed-loop training, and connectomics-informed architectures will produce statistical amplification and mechanistic insight into shared control objectives.
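Of the ingredients in that premise, knowledge distillation is the most self-contained to illustrate: a small, interpretable "student" is fit to the outputs of a larger "teacher" model rather than to raw data. The teacher, student, and data below are all invented stand-ins.

```python
import numpy as np

# Toy distillation sketch (invented stand-ins, not the paper's method):
# a fitted black-box "teacher" is queried densely, and a two-coefficient
# "student" is fit to the teacher's outputs by least squares.

rng = np.random.default_rng(2)

def teacher(x):
    # stand-in for a large fitted model: a saturating response curve
    return 0.9 * np.tanh(1.5 * x)

xs = rng.uniform(-2, 2, size=500)       # dense query points, no noise
targets = teacher(xs)

Phi = np.stack([xs, xs**3], axis=1)     # student features: x and x^3
w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def student(x):
    return w[0] * x + w[1] * x**3

max_err = np.max(np.abs(student(xs) - targets))
# The student compresses the teacher into two readable coefficients at
# the cost of a bounded approximation error.
```

The same move, scaled up, is how an understandable dynamical model could inherit what a large predictive model has learned.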

What would settle it

A concrete test would be whether models built this way achieve better few-shot generalization to new animals or contexts than models trained on isolated datasets, or whether they recover consistent latent objectives across perturbations and anatomical constraints.
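That test can be caricatured end to end on synthetic data. Everything here, from the shared dynamics to the per-animal gain, is an invented toy; it shows the shape of the comparison, not the paper's protocol.

```python
import numpy as np

# Toy few-shot comparison (all dynamics invented): "animals" share one
# dynamics matrix but differ by a scalar gain. A pooled model learns the
# shared matrix from well-sampled animals and adapts only the gain to a
# new animal's short recording; an isolated model fits everything from it.

rng = np.random.default_rng(3)
n, T_long, T_few = 3, 500, 30
A_shared = 0.75 * np.eye(n) + 0.05 * rng.standard_normal((n, n))

def simulate(gain, T):
    X = np.zeros((T, n))
    for t in range(T - 1):
        X[t + 1] = gain * A_shared @ X[t] + 0.1 * rng.standard_normal(n)
    return X

def fit_A(X):
    # one-step least-squares dynamics fit
    return np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# Pooled: average gain-corrected estimates from well-sampled animals.
A_pooled = np.mean([fit_A(simulate(g, T_long)) / g
                    for g in (0.8, 0.9, 1.1)], axis=0)

X_new = simulate(1.0, T_few)                  # short recording, new animal
pred = X_new[:-1] @ A_pooled.T
gain_hat = np.sum(X_new[1:] * pred) / np.sum(pred ** 2)

err_pooled = np.linalg.norm(gain_hat * A_pooled - A_shared)
err_isolated = np.linalg.norm(fit_A(X_new) - A_shared)
# With shared structure, few-shot adaptation needs one number; without
# it, nine noisy parameters must come from the same short recording.
```

The proposed test is the scaled-up version of this comparison, with recovered latent objectives playing the role the scalar gain plays here.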

Figures

Figures reproduced from arXiv: 2604.23903 by Alfonso Renart, Auke Ijspeert, Ayesha Vermani, Daniel McNamee, Gonzalo G. de Polavieja, Il Memming Park, Joseph J. Paton, Juan Álvaro Gallego, Kathleen Esfahany, Matthew Dowling, Michael Orger, Shreya Saxena, Srinivas C. Turaga, Zachary Mainen.

Figure 1
Figure 1: Integrative neuroscience objectives. Model-centric integration of neural and behavioral recordings, collectively forming constraints on the joint model that can behave. (Left) Heterogeneous neural and behavioral data from many sources and closed-loop experiments provide constraints. (Right) The integrative modeling framework enables statistical amplification and scientific discovery.
Figure 2
Figure 2: State-space modeling as a framework for neural dynamics. State-space models explain observed neural and behavioral time series through latent dynamical states and an observation model. The central inferential task is to recover the underlying nonlinear dynamics from partial, noisy measurements (Sec. 2.1).
Figure 3
Figure 3: Integrative model at the level of a meta-dynamical system (modified from …).
original abstract

Large-scale neuroscience is generating rich datasets across animals, brain areas and behavioral contexts, yet our modeling efforts remains fragmented across isolated experiments. We argue that understanding behavior requires integrative neurocybernetic models: understandable dynamical models that capture the closed-loop coupling of brain, body and environment, treat the brain as a controller pursuing latent objectives, represent structured variation across scales, and scale to heterogeneous datasets. Such models shift the goal from predicting neural recordings in isolation to inferring the organizing principles that govern neural and behavioral dynamics. We outline a practical route toward this goal by combining nonlinear state-space models and meta-dynamical extensions with scalable inference, knowledge distillation, mixed open- and closed-loop training, and connectomics-informed architectures. By pooling complementary constraints from recordings, behavior, perturbations and anatomy, integrative neurocybernetic models can provide statistical amplification, few-shot generalization, and mechanistic insight into shared dynamical structure, individual variation, and the control objectives that govern behavior. This agenda offers a model-centric path from fragmented data to a mechanistic science of how brains produce behavior.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper argues that fragmented modeling of large-scale neuroscience data across isolated experiments limits understanding of behavior. It proposes 'integrative neurocybernetic models' as understandable dynamical models that capture closed-loop brain-body-environment coupling, treat the brain as a controller pursuing latent objectives, represent structured multi-scale variation, and scale to heterogeneous datasets. The manuscript outlines a practical route by combining nonlinear state-space models and meta-dynamical extensions with scalable inference, knowledge distillation, mixed open- and closed-loop training, and connectomics-informed architectures, claiming this integration will yield statistical amplification, few-shot generalization, and mechanistic insight into shared dynamical structure, individual variation, and control objectives.

Significance. If the proposed combination of techniques can be implemented to deliver the claimed benefits, the work would be significant for shifting the field from isolated predictive models toward unified mechanistic frameworks that integrate complementary constraints from recordings, behavior, perturbations, and anatomy. This could enable more efficient use of large heterogeneous datasets and provide falsifiable insights into organizing principles of neural and behavioral dynamics.

major comments (2)
  1. [Abstract] Abstract: The assertion that the listed techniques 'can provide statistical amplification, few-shot generalization, and mechanistic insight' is presented as a direct outcome of the proposed route, yet the manuscript contains no derivation, toy model, simulation, or citation to prior results establishing how nonlinear state-space models with meta-dynamical extensions plus the other components interact to produce these specific benefits. This claim is load-bearing for the central agenda.
  2. [Full text] Conceptual framework (throughout): The manuscript treats the integration of closed-loop modeling, latent objectives, and connectomics-informed architectures as sufficient to overcome fragmentation, but does not address or provide evidence against potential technical obstacles such as identifiability of latent objectives in closed-loop settings or the computational feasibility of scaling the combined methods to real multi-animal datasets.
minor comments (1)
  1. [Abstract] Abstract: Grammatical error in 'our modeling efforts remains fragmented' (should be 'remain').

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We appreciate the referee's constructive feedback on our manuscript. The comments raise important points regarding the evidential basis for our claims and the need to address potential limitations in the proposed framework. We address each major comment below and indicate the revisions we will make to strengthen the paper.

point-by-point responses
  1. Referee: [Abstract] Abstract: The assertion that the listed techniques 'can provide statistical amplification, few-shot generalization, and mechanistic insight' is presented as a direct outcome of the proposed route, yet the manuscript contains no derivation, toy model, simulation, or citation to prior results establishing how nonlinear state-space models with meta-dynamical extensions plus the other components interact to produce these specific benefits. This claim is load-bearing for the central agenda.

    Authors: We thank the referee for this observation. As a perspective outlining a research agenda, the manuscript does not include new empirical results, derivations, or simulations. The benefits are posited based on the synergistic combination of established techniques, where nonlinear state-space models capture dynamics, meta-dynamical extensions enable generalization across heterogeneous data, and the other elements provide additional constraints for inference. To address this, we will revise the abstract to qualify the claim as a potential outcome supported by the integration of these methods and include relevant citations to prior literature demonstrating components of these benefits (such as meta-learning for few-shot generalization in dynamical models). This will be a partial revision, as we maintain the central proposal while enhancing its justification. revision: partial

  2. Referee: [Full text] Conceptual framework (throughout): The manuscript treats the integration of closed-loop modeling, latent objectives, and connectomics-informed architectures as sufficient to overcome fragmentation, but does not address or provide evidence against potential technical obstacles such as identifiability of latent objectives in closed-loop settings or the computational feasibility of scaling the combined methods to real multi-animal datasets.

    Authors: We agree with the referee that a more complete treatment should consider these technical challenges. The manuscript emphasizes the potential advantages of the integrative approach but does not delve into counterarguments or obstacles. In the revised version, we will expand the conceptual framework section to include a discussion of these issues. Specifically, we will address the identifiability of latent objectives by referencing control-theoretic approaches that use additional constraints from behavior and perturbations, and discuss computational feasibility by noting advances in scalable inference algorithms and how connectomics-informed architectures can reduce the effective parameter space. This addition will provide a more balanced view without altering the core argument. revision: yes

Circularity Check

0 steps flagged

No significant circularity identified

full rationale

The manuscript is a position paper proposing an integrative modeling agenda without any mathematical derivations, equations, parameter fittings, or predictions. It outlines a conceptual route combining nonlinear state-space models, meta-dynamical extensions, scalable inference, knowledge distillation, mixed open- and closed-loop training, and connectomics-informed architectures to achieve statistical amplification and mechanistic insight, but presents these as forward-looking suggestions rather than results derived from prior steps within the paper. No load-bearing claims reduce to self-definitions, fitted inputs renamed as predictions, or self-citation chains. The argument is self-contained as a high-level proposal drawing on existing literature without internal circular reductions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on domain assumptions about data fragmentation and the efficacy of the proposed synthesis, with the integrative neurocybernetic model itself introduced as a new conceptual entity without independent evidence.

axioms (2)
  • domain assumption Large-scale neuroscience datasets are fragmented across isolated experiments and animals
    Explicitly stated as the motivation in the opening sentence of the abstract.
  • ad hoc to paper Combining the listed techniques will yield statistical amplification and mechanistic insight
    Central assertion in the final paragraph without supporting derivation or example.
invented entities (1)
  • integrative neurocybernetic models no independent evidence
    purpose: To capture closed-loop brain-body-environment dynamics and latent control objectives at multiple scales
    New term and framework introduced to organize the proposed agenda; no independent falsifiable evidence provided in the abstract.

pith-pipeline@v0.9.0 · 5548 in / 1502 out tokens · 70248 ms · 2026-05-08T04:43:59.022930+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

112 extracted references · 1 canonical work page

  1. [1]

    A uni- fied, scalable framework for neural population decoding

    Mehdi Azabou, Vinam Arora, Venkataramana Ganesh, et al. A uni- fied, scalable framework for neural population decoding. InAdvances in Neural Information Processing Systems, October 2023

  2. [2]

    Multi- session, multi-task neural decoding from distinct cell-types and brain regions

    Mehdi Azabou, Krystal Xuejing Pan, Vinam Arora, et al. Multi- session, multi-task neural decoding from distinct cell-types and brain regions. InThe Thirteenth International Conference on Learning Repre- sentations, 2024

  3. [3]

    Do deep nets really need to be deep? Advances in Neural Information Processing Systems, 27, 2014

    Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? Advances in Neural Information Processing Systems, 27, 2014

  4. [4]

    Prediction of neural activ- ity in connectome-constrained recurrent networks.Nature Neuro- science, 28(12):2561–2574, December 2025

    Manuel Beiran and Ashok Litwin-Kumar. Prediction of neural activ- ity in connectome-constrained recurrent networks.Nature Neuro- science, 28(12):2561–2574, December 2025

  5. [5]

    On the oppor- tunities and risks of foundation models.arXiv [cs.LG], August 2021

    Rishi Bommasani, Drew A Hudson, Ehsan Adeli, et al. On the oppor- tunities and risks of foundation models.arXiv [cs.LG], August 2021

  6. [6]

    The tradeoffs of large scale learn- ing

    Leon Bottou and Olivier Bousquet. The tradeoffs of large scale learn- ing. InAdvances in Neural Information Processing Systems. Curran As- sociates, Inc., 2007

  7. [7]

    Dynamic models of large-scale brain activity

    Michael Breakspear. Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3):340–352, February 2017

  8. [8]

    A quantitative model of conserved macroscopic dynamics predicts future motor commands

    Connor Brennan and Alexander Proekt. A quantitative model of conserved macroscopic dynamics predicts future motor commands. eLife, 8, July 2019

  9. [9]

    RT-1: Robotics transformer for real-world control at scale.arXiv [cs.RO], December 2022

    Anthony Brohan, Noah Brown, Justice Carbajal, et al. RT-1: Robotics transformer for real-world control at scale.arXiv [cs.RO], December 2022

  10. [10]

    Statistics of neu- ronal identification with open- and closed-loop measures of intrinsic excitability.Frontiers in Neural Circuits, 6:19, April 2012

    T ed Brookings, Rachel Grashow, and Eve Marder. Statistics of neu- ronal identification with open- and closed-loop measures of intrinsic excitability.Frontiers in Neural Circuits, 6:19, April 2012

  11. [11]

    Discovering governing equations from data by sparse identification of nonlinear dynamical systems.Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016

    Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems.Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016

  12. [12]

    Analyzing populations of neural networks via dynam- ical model embedding.arXiv [cs.LG], February 2023

    Jordan Cotler, Kai Sheng T ai, Felipe Hernández, Blake Elias, and David Sussillo. Analyzing populations of neural networks via dynam- ical model embedding.arXiv [cs.LG], February 2023

  13. [13]

    Action suppres- sion reveals opponent parallel control via striatal circuits.Nature, 607(7919):521–526, July 2022

    Bruno F Cruz, Gonçalo Guiomar, Sofia Soares, et al. Action suppres- sion reveals opponent parallel control via striatal circuits.Nature, 607(7919):521–526, July 2022

  14. [14]

    Multilevel visuomotor con- trol of locomotion in Drosophila.Current Opinion in Neurobiology, 82: 102774, October 2023

    T omás L Cruz and M Eugenia Chiappe. Multilevel visuomotor con- trol of locomotion in Drosophila.Current Opinion in Neurobiology, 82: 102774, October 2023

  15. [15]

    ParaRNN: Unlocking parallel training of nonlinear RNNs for large language models

    Federico Danieli, Pau Rodriguez, Miguel Sarabia, Xavier Suau, and Luca Zappella. ParaRNN: Unlocking parallel training of nonlinear RNNs for large language models. InICLR, November 2025

  16. [16]

    Brain-like functional specialization emerges spontaneously in deep neural networks.Science Advances, 8(11):eabl8913, March 2022

    Katharina Dobs, Julio Martinez, Alexander J E Kell, and Nancy Kan- wisher. Brain-like functional specialization emerges spontaneously in deep neural networks.Science Advances, 8(11):eabl8913, March 2022

  17. [17]

    Range, not inde- pendence, drives modularity in biologically inspired representations

    Will Dorrell, Kyle Hsu, Luke Hollingsworth, et al. Range, not inde- pendence, drives modularity in biologically inspired representations. arXiv [q-bio.NC], April 2025

  18. [18]

    eXponential FAmily dynamical systems (XFADS): Large-scale nonlinear gaussian state-space modeling

    Matthew Dowling, Yuan Zhao, and Il Memming Park. eXponential FAmily dynamical systems (XFADS): Large-scale nonlinear gaussian state-space modeling. InAdvances in Neural Information Processing Systems, December 2024

  19. [19]

    Computation through cortical dynamics.Neuron, 98(5):873–875, June 2018

    Laura N Driscoll, Matthew D Golub, and David Sussillo. Computation through cortical dynamics.Neuron, 98(5):873–875, June 2018

  20. [20]

    the bitter lesson

    Eva Dyer and Blake Richards. Accepting “the bitter lesson” and em- bracing the brain’s complexity. The T ransmitter, March 2025. 8 Park et al.preprint

  21. [21]

    Ecker, Philipp Berens, R

    Alexander S. Ecker, Philipp Berens, R. James Cotton, et al. State dependence of noise correlations in macaque primary visual cortex. Neuron, 82(1):235–248, April 2014

  22. [22]

    A prac- tical survey on faster and lighter transformers.ACM computing sur- veys, 55(14s):1–40, December 2023

    Quentin Fournier, Gaétan Marceau Caron, and Daniel Aloise. A prac- tical survey on faster and lighter transformers.ACM computing sur- veys, 55(14s):1–40, December 2023

  23. [23]

    Dynamic representations and gen- erative models of brain function.Brain research bulletin, 54(3):275– 285, 2001

    Karl J Friston and Cathy J Price. Dynamic representations and gen- erative models of brain function.Brain research bulletin, 54(3):275– 285, 2001

  24. [24]

    Walk- ing strides direct rapid and flexible recruitment of visual circuits for course control in drosophila.Neuron, 110(13):2124–2138, July 2022

    T erufumi Fujiwara, Margarida Brotas, and M Eugenia Chiappe. Walk- ing strides direct rapid and flexible recruitment of visual circuits for course control in drosophila.Neuron, 110(13):2124–2138, July 2022

  25. [25]

    Long-term stability of cortical population dynamics underlying consistent behavior.Nature neuroscience, Jan- uary 2020

    Juan A Gallego, Matthew G Perich, Raeed H Chowdhury, Sara A Solla, and Lee E Miller. Long-term stability of cortical population dynamics underlying consistent behavior.Nature neuroscience, Jan- uary 2020

  26. [26]

    Nonlinear convergence analysis for the parareal algorithm

    Martin J Gander and Ernst Hairer. Nonlinear convergence analysis for the parareal algorithm. InLecture Notes in Computational Sci- ence and Engineering, Lecture notes in computational science and engineering, pages 45–56. Springer Berlin Heidelberg, Berlin, Hei- delberg, 2008

  27. [27]

    Inferring system and optimal control parameters of closed-loop systems from partial observations.arXiv [math.OC], pages 8006–8013, February 2025

    Victor Geadah, Juncal Arbelaiz, Harrison Ritz, et al. Inferring system and optimal control parameters of closed-loop systems from partial observations.arXiv [math.OC], pages 8006–8013, February 2025

  28. [28]

    Moving beyond generalization to accurate interpretation of flexible models.Nature machine intelli- gence, 2(11):674–683, October 2020

    Mikhail Genkin and T atiana A Engel. Moving beyond generalization to accurate interpretation of flexible models.Nature machine intelli- gence, 2(11):674–683, October 2020

  29. [29]

    A high- performance neural prosthesis enabled by control algorithm design

    Vikash Gilja, Paul Nuyujukian, Cindy A Chestek, et al. A high- performance neural prosthesis enabled by control algorithm design. Nature neuroscience, 15(12):1752–1757, 2012

  30. [30]

    Predictability enables parallelization of nonlinear state space models

    Xavier Gonzalez, Leo Kozachkov, David M Zoltowski, Kenneth L Clarkson, and Scott Linderman. Predictability enables parallelization of nonlinear state space models. InThe Thirty-ninth Annual Confer- ence on Neural Information Processing Systems, 2025

  31. [31]

    T owards scalable and stable parallelization of nonlinear RNNs

    Xavier Gonzalez, Andrew Warrington, Jimmy T H Smith, and Scott W Linderman. T owards scalable and stable parallelization of nonlinear RNNs. InAdvances in Neural Information Processing Systems, 2025

  32. [32]

    Reliable neuro- modulation from circuits with variable underlying structure.Proceed- ings of the National Academy of Sciences, 106(28):11742–11746, July 2009

    Rachel Grashow, T ed Brookings, and Eve Marder. Reliable neuro- modulation from circuits with variable underlying structure.Proceed- ings of the National Academy of Sciences, 106(28):11742–11746, July 2009

  33. [33]

    Mamba: Linear-time sequence modeling with selective state spaces.arXiv [cs.LG], December 2023

    Albert Gu and T ri Dao. Mamba: Linear-time sequence modeling with selective state spaces.arXiv [cs.LG], December 2023

  34. [34]

    Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing rhythmic oscillators.Neuron, 77(5):845– 858, March 2013

    Gabrielle J Gutierrez, Timothy O’Leary, and Eve Marder. Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing rhythmic oscillators.Neuron, 77(5):845– 858, March 2013

  35. [35]

    Time, con- trol, and the nervous system.Annual Review of Neuroscience, 48(1): 465–489, July 2025

    Caroline Haimerl, Filipe S Rodrigues, and Joseph J Paton. Time, con- trol, and the nervous system.Annual Review of Neuroscience, 48(1): 465–489, July 2025

  36. [36]

    The time is ripe to reverse engineer an entire nervous system: simulating behavior from neural interactions.arXiv [q-bio.NC], August 2023

    Gal Haspel, Ben Baker, Isabel Beets, et al. The time is ripe to reverse engineer an entire nervous system: simulating behavior from neural interactions.arXiv [q-bio.NC], August 2023

  37. [37]

    Haykin and J

    S. Haykin and J. Principe. Making sense of a complex world [chaotic events modeling].IEEE Signal Processing Magazine, 15(3):66–81, May 1998

  38. [38]

    Neural mechanisms of speed- accuracy tradeoff.Neuron, 76(3):616–628, November 2012

    Richard P Heitz and Jeffrey D Schall. Neural mechanisms of speed- accuracy tradeoff.Neuron, 76(3):616–628, November 2012

  39. [39]

    Distilling the knowl- edge in a neural network.arXiv [stat.ML], March 2015

    Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowl- edge in a neural network.arXiv [stat.ML], March 2015

  40. [40]

    Hochberg, Mijail D

    Leigh R. Hochberg, Mijail D. Serruya, Gerhard M. Friehs, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia.Nature, 442(7099):164–171, July 2006

  41. [41]

    Myopic control of neural dy- namics.PLOS Computational Biology, March 2019

    David Hocker and Il Memming Park. Myopic control of neural dy- namics.PLOS Computational Biology, March 2019

  42. [42]

    Dynamical movement primitives: learning attractor models for motor behaviors.Neural computation, 25(2):328–373, February 2013

    Auke Jan Ijspeert, Jun Nakanishi, Heiko Hoffmann, Peter Pastor, and Stefan Schaal. Dynamical movement primitives: learning attractor models for motor behaviors.Neural computation, 25(2):328–373, February 2013

  43. [43]

    Re- producibility of in vivo electrophysiological measurements in mice

    International Brain Laboratory, Kush Banga, Julius Benson, et al. Re- producibility of in vivo electrophysiological measurements in mice. eLife, 13, May 2025

  44. [44]

    Dis- entangling the roles of distinct cell classes with cell-type dynamical systems

    Aditi Jha, Diksha Gupta, Carlos D Brody, and Jonathan W Pillow. Dis- entangling the roles of distinct cell classes with cell-type dynamical systems. InNeurIPS, November 2024

  45. [45]

    A generic non-invasive neuromotor interface for human-computer interaction.Nature, pages 1–10, July 2025

    Patrick Kaifosh, Thomas R Reardon, CTRL-labs at Reality Labs, et al. A generic non-invasive neuromotor interface for human-computer interaction.Nature, pages 1–10, July 2025

  46. [46]

    Kao, Paul Nuyujukian, Stephen I

    Jonathan C. Kao, Paul Nuyujukian, Stephen I. Ryu, et al. Single-trial dynamics of motor cortex and their applications to brain-machine interfaces.Nature Communications, 6:7759+, July 2015

  47. [47]

    The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective.Philosophy of Science, 78(4):601–627, 2011

    David Michael Kaplan and Carl F Craver. The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective.Philosophy of Science, 78(4):601–627, 2011

  48. [48]

    Few-shot al- gorithms for COnsistent neural decoding (FALCON) benchmark

    Brianna M Karpowicz, Joel Y e, Chaofei Fan, et al. Few-shot al- gorithms for COnsistent neural decoding (FALCON) benchmark. bioRxiv: The Preprint Server for Biology, October 2024

  49. [49]

    Spontaneous evolution of modularity and network motifs.Proceedings of the National Academy of Sciences, 102(39):13773–13778, September 2005

    Nadav Kashtan and Uri Alon. Spontaneous evolution of modularity and network motifs.Proceedings of the National Academy of Sciences, 102(39):13773–13778, September 2005

  50. [50]

    Dynamic causal modelling for EEG and MEG.Cognitive neurodynam- ics, 2(2):121–136, June 2008

    Stefan J Kiebel, Marta I Garrido, Rosalyn J Moran, and Karl J Friston. Dynamic causal modelling for EEG and MEG.Cognitive neurodynam- ics, 2(2):121–136, June 2008

  51. [51]

    Cluster- ing units in neural networks: upstream vs downstream information

    Richard D Lange, David S Rolnick, and Konrad P Kording. Cluster- ing units in neural networks: upstream vs downstream information. arXiv [cs.LG], March 2022

  52. [52]

    Connectome-constrained networks predict neural activity across the fly visual system.Nature, 634:1132–1140, 2024

    Janne K Lappalainen, Fabian T schopp, Sridhama Prakhya, et al. Connectome-constrained networks predict neural activity across the fly visual system.Nature, 634:1132–1140, 2024

  53. [53]

    Brian Lau, Tiago Monteiro, and Joseph J Paton. The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia.Current Opinion in Neurobiol- ogy, 46:241–247, October 2017

  54. [54]

    Multitasking recurrent networks utilize compositional strategies for control of movement.bioRxiv, page 2025.09.10.675375, September 2025

    John Lazzari and Shreya Saxena. Multitasking recurrent networks utilize compositional strategies for control of movement.bioRxiv, page 2025.09.10.675375, September 2025

  55. [55]

    Offline reinforcement learning: T utorial, review, and perspectives on open problems.arXiv [cs.LG], May 2020

    Sergey Levine, Aviral Kumar, George T ucker, and Justin Fu. Offline reinforcement learning: T utorial, review, and perspectives on open problems.arXiv [cs.LG], May 2020

  56. [56]

    Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C

    Scott Linderman, Annika Nichols, David Blei, Manuel Zimmer, and Liam Paninski. Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity in C. elegans. bioRxiv, page 621540, April 2019

  57. [57]

    What brain signals are suitable for feedback control of deep brain stimulation in parkinson’s disease? Annals of the New Y ork Academy of Sciences, 1265(1):9–24, August 2012

    Simon Little and Peter Brown. What brain signals are suitable for feedback control of deep brain stimulation in parkinson’s disease? Annals of the New Y ork Academy of Sciences, 1265(1):9–24, August 2012. 9 Park et al.preprint

  58. [58]

    ZAPBench: A benchmark for whole-brain activity prediction in ze- brafish

    Jan-Matthis Lueckmann, Alexander Immer, Alex Bo-Yuan Chen, et al. ZAPBench: A benchmark for whole-brain activity prediction in ze- brafish. InICLR, October 2024

  59. [59]

    Shenoy, and William T

    Valerio Mante, David Sussillo, Krishna V. Shenoy, and William T. Newsome. Context-dependent computation by recurrent dynam- ics in prefrontal cortex.Nature, 503(7474):78–84, November 2013

  60. [60]

    Mackenzie Weygandt Mathis and Alexander Mathis. Joint modelling of brain and behaviour dynamics with artificial intelligence. Nature Reviews Neuroscience, pages 1–14, December 2025

  61. [61]

    Josh Merel, Diego Aldarondo, Jesse Marshall, et al. Deep neuroethology of a virtual rodent. arXiv [q-bio.NC], November 2019

  62. [62]

    Lu Mi, Richard Xu, Sridhama Prakhya, et al. Connectome-constrained latent variable model of whole-brain neural activity. In International Conference on Learning Representations (ICLR), 2022

  63. [63]

    Jonathan A Michaels, Stefan Schaffelhofer, Andres Agudelo-Toro, and Hansjörg Scherberger. A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping. Proceedings of the National Academy of Sciences of the United States of America, 117(50):32124–32135, December 2020

  64. [64]

    Ida Momennejad. A rubric for human-like agents and NeuroAI. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 378(1869):20210446, January 2023

  65. [65]

    Joseph D Monaco and Grace M Hwang. Neurodynamical computing at the information boundaries of intelligent systems. Cognitive Computation, 16(5):1–13, 2024

  66. [66]

    Aditya Nair, Amit Vinograd, Mengyu Liu, et al. The neural computation of affective internal states in the hypothalamus: A dynamical systems perspective. Neuron, 113(23):3887–3907, December 2025

  67. [67]

    Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, and Laurent Charlin. Continual learning via local module composition. In Advances in Neural Information Processing Systems, November 2021

  68. [68]

    Marino Pagan, Vincent D Tang, Mikio C Aoi, et al. Individual variability of neural computations underlying flexible decisions. Nature, 639(8054):421–429, 2025

  69. [69]

    Chethan Pandarinath, Daniel J O'Shea, Jasmine Collins, et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nature Methods, 15(10):805–815, October 2018

  70. [70]

    Liam Paninski, Yashar Ahmadian, Daniel Gil G. Ferreira, et al. A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1-2):107–126, August 2010

  71. [71]

    Felix Pei, Joel Ye, David Zoltowski, et al. Neural latents benchmark '21: Evaluating latent variable models of neural population activity. In Advances in Neural Information Processing Systems, Track on Datasets and Benchmarks, September 2021

  72. [72]

    Matthew G Perich, Devika Narain, and Juan A Gallego. A neural manifold view of the brain. Nature Neuroscience, pages 1–16, July 2025

  73. [73]

    Sarah M Pugliese, Grant M Chou, Elliott T T Abe, et al. Connectome simulations identify a central pattern generator circuit for fly walking. bioRxiv, page 2025.09.12.675944, 2025

  74. [74]

    Scott Reed, Konrad Żołna, Emilio Parisotto, et al. A generalist agent. Transactions on Machine Learning Research, August 2022

  75. [75]

    Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, May 2014

  76. [76]

    Reidar Riveland, Alex Pouget, and Peter E Latham. Syntactic composition in neural systems, 2026. Computational and Systems Neuroscience (COSYNE), poster 2-016

  77. [77]

    Stephane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635. JMLR Workshop and Conference Proceedings, June 2011

  78. [78]

    W Ross Ashby. Design for a Brain. New York, Wiley, 1952

  79. [79]

    Sam Roweis and Zoubin Ghahramani. Learning nonlinear dynamical systems using the expectation-maximization algorithm. In Simon Haykin, editor, Kalman Filtering and Neural Networks, pages 175–220. John Wiley & Sons, Inc, 2001

  80. [80]

    Nicole C Rust. Elusive cures: Why neuroscience hasn't solved brain disorders-and how we can change that. Princeton University Press, December 2025. ISBN 9780691243078
