pith. machine review for the scientific record.

arxiv: 2605.13924 · v1 · submitted 2026-05-13 · 💻 cs.NE

Recognition: 2 theorem links · Lean Theorem

Dual-axis attribution of zebrafish tectal microcircuits for energy-efficient and robust neurocomputing

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 02:47 UTC · model grok-4.3

classification 💻 cs.NE
keywords zebrafish tectum · retinotectal circuit · spiking neural networks · energy efficiency · robustness · bio-inspired neural networks · subcircuit ablation · ResNet18

The pith

Zebrafish tectal subcircuits separate into spike-efficient gates and robustness stabilizers that transfer to artificial networks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reconstructs a directed zebrafish retinotectal microcircuit graph and uses selective ablation inside a leaky integrate-and-fire spiking neural network to assign computational roles along two axes. The ns_TIN subcircuit maintains low spike counts while still affecting prediction error, indicating a role as an energy-saving internal gate. The superficial_TIN subcircuit shows the strongest effect on robustness when removed, indicating a feedback-like stabilizing function. These roles are then implemented as modules inside ResNet18 and tested on CIFAR-10, where each module improves the network along the axis attributed to its biological counterpart.
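As a concrete reference point, the leaky integrate-and-fire dynamics at the heart of the testbed can be sketched in a few lines. The time constant, threshold, and reset values below are illustrative placeholders, not parameters taken from the paper.

```python
import numpy as np

def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of leaky integrate-and-fire dynamics for a vector of neurons.

    The membrane potential decays toward rest and integrates input; neurons
    crossing threshold emit a spike and are reset.
    """
    v = v + (dt / tau) * (-v + input_current)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)
    return v, spikes

# Drive 5 neurons with a constant suprathreshold current for 100 steps
# and accumulate their spike counts.
v = np.zeros(5)
total_spikes = np.zeros(5)
for _ in range(100):
    v, s = lif_step(v, input_current=np.full(5, 1.5))
    total_spikes += s
```

Ablating a subcircuit in such a testbed amounts to silencing the corresponding units (clamping their output spikes to zero) and re-running the simulation.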

Core claim

By ablating specific subcircuits in a leaky integrate-and-fire spiking neural network testbed built from the zebrafish retinotectal graph, the ns_TIN subcircuit exhibits low spike footprint with measurable influence on prediction error, functioning as a spike-efficient internal information gate, whereas the superficial_TIN subcircuit exhibits the highest robustness sensitivity, functioning in a feedback-like manner to maintain system-level stability; these roles transfer successfully when implemented as modules in ResNet18-based networks evaluated on CIFAR-10.

What carries the argument

Selective ablation of predefined subcircuits in the reconstructed zebrafish-inspired retinotectal graph, quantified by Energy Sensitivity Index and Robustness Sensitivity Index inside a leaky integrate-and-fire spiking neural network testbed.
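The exact definitions of the two indices are not reproduced in this summary. One plausible reading, sketched here with hypothetical names (`run_network`, `toy_run`), treats ESI as error increase per unit of spike activity removed and RSI as the extra error growth under input noise once the subcircuit is ablated.

```python
import numpy as np

def sensitivity_indices(run_network, subcircuit_mask, noise_levels):
    """Illustrative ablation-based sensitivity indices.

    `run_network(mask, noise)` is assumed to return (total_spikes, error)
    for a network whose neurons flagged in `mask` are silenced.
    """
    none = np.zeros_like(subcircuit_mask)
    spikes_full, err_full = run_network(none, noise=0.0)
    spikes_abl, err_abl = run_network(subcircuit_mask, noise=0.0)
    # ESI: error incurred per spike of activity the subcircuit contributed.
    esi = (err_abl - err_full) / max(spikes_full - spikes_abl, 1e-9)

    # RSI: how much faster error grows with noise once the subcircuit is gone.
    noisy_gap = []
    for sigma in noise_levels:
        _, e_full = run_network(none, noise=sigma)
        _, e_abl = run_network(subcircuit_mask, noise=sigma)
        noisy_gap.append((e_abl - e_full) - (err_abl - err_full))
    rsi = float(np.mean(noisy_gap))
    return esi, rsi

# Toy stand-in: ablation removes 10 spikes and adds error that grows with noise.
def toy_run(mask, noise):
    ablated = bool(mask.any())
    spikes = 100 - (10 if ablated else 0)
    error = 0.5 + (0.1 if ablated else 0.0) + noise * (0.3 if ablated else 0.1)
    return spikes, error

esi, rsi = sensitivity_indices(toy_run, np.array([1, 0, 0]), noise_levels=[0.5, 1.0])
```

Under this reading, a low-ESI subcircuit like ns_TIN contributes little spike cost for its effect on error, while a high-RSI subcircuit like superficial_TIN is the one whose removal most degrades noise tolerance.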

If this is right

  • The ns_TIN-inspired module preserves ResNet18 classification accuracy better as the computation budget is reduced.
  • The superficial_TIN-inspired module increases ResNet18 robustness to Gaussian noise on input images.
  • Bio-inspired architectures can be assembled from functionally attributed subcircuits rather than generic biological motifs.
  • A subcircuit-level mapping from biology to artificial networks is feasible for both efficiency and stability properties.
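The gating half of this transfer can be sketched minimally. The function name, budget parameter, and top-k channel selection below are assumptions for illustration; the paper's actual module design is not specified in this summary.

```python
import numpy as np

def spike_efficient_gate(x, budget=0.5):
    """Hypothetical ns_TIN-style gate: keep only the most active channels.

    x: feature map of shape (channels, height, width). A fraction `budget`
    of channels, ranked by mean activation, passes through; the rest are
    zeroed so downstream layers can skip them under a reduced compute budget.
    """
    c = x.shape[0]
    k = max(1, int(round(c * budget)))
    scores = x.mean(axis=(1, 2))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros(c, dtype=bool)
    mask[keep] = True
    return x * mask[:, None, None], mask

x = np.random.default_rng(0).random((8, 4, 4))
y, mask = spike_efficient_gate(x, budget=0.25)
```

The design choice mirrors the attributed biological role: information flow is throttled at an internal gate rather than by uniformly shrinking the network.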

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same dual-axis attribution method could be applied to other biological sensory circuits to extract additional modular primitives.
  • Combining both attributed modules in a single network might produce systems that are simultaneously energy-efficient and noise-robust.
  • The successful transfer from spiking testbed to non-spiking ResNet18 suggests the attributed roles are somewhat independent of exact neuron model details.

Load-bearing premise

The ablation effects observed in the artificial leaky integrate-and-fire spiking neural network testbed accurately capture the biological computational roles of the subcircuits in the zebrafish tectum.

What would settle it

Failure of the ns_TIN-inspired module to preserve ResNet18 accuracy under reduced inference budget on CIFAR-10, or failure of the superficial_TIN-inspired module to improve robustness under Gaussian noise, would falsify the attributed roles.

Figures

Figures reproduced from arXiv: 2605.13924 by Hao Zhang, Ningping Li, Yi Zhou.

Figure 1. Mesoscopic–microscopic abstraction of the zebrafish visual–motor brain graph. Microscopic cell-category nodes are arranged by functional group, and gray edges denote directed connection probabilities. The ns_TIN group is highlighted as the validated energy-efficient substructure, whereas the superficial_TIN group is highlighted as the validated robustness-efficient substructure. view at source ↗
Figure 2. Schematic illustration of the ANN transfer design. The ns_TIN-inspired module introduces adaptive gating for computation-budget reduction, whereas the superficial_TIN-inspired module introduces feedback-like refinement for robustness under Gaussian noise corruption. (Li: Preprint submitted to Elsevier.) view at source ↗
Figure 3. Directed connection matrix of the reconstructed zebrafish-inspired visual–motor brain abstraction. Rows indicate postsynaptic categories and columns indicate presynaptic categories. Nodes are grouped by neural class. view at source ↗
Figure 4. SNN substructure ablation results. The ESI analysis identifies ns_TIN as the most representative internal energy-efficient candidate after excluding the visual input group, whereas the RSI analysis identifies superficial_TIN as the most robustness-efficient substructure. view at source ↗
Figure 5. Cross-modal ANN transfer results. ResNet18WithNsTIN shows slower performance degradation under inference-budget reduction, whereas ResNet18WithSuperficialTIN maintains higher accuracy under Gaussian noise corruption. view at source ↗
read the original abstract

Biological neural circuits contain specialized substructures that support distinct computational functions, yet many bio-inspired neural networks borrow biological motifs without identifying their circuit-level origins. In this study, we investigate whether zebrafish tectal microcircuits can be attributed along two computational axes: energy-efficient information processing and robustness-preserving stabilization. We reconstruct a directed zebrafish-inspired retinotectal microcircuit graph and verify retinotectal signal propagation through dynamic simulation. A leaky integrate-and-fire spiking neural network is then used as a nonlinear perturbation testbed, where predefined subcircuits are selectively ablated and evaluated using the Energy Sensitivity Index and the Robustness Sensitivity Index. The results reveal a functional dissociation between two tectal subcircuits. The ns_TIN subcircuit shows a low spike footprint but a measurable influence on prediction error, suggesting a role as a spike-efficient internal information gate. In contrast, the superficial_TIN subcircuit produces the highest robustness sensitivity, suggesting a feedback-like role in maintaining system-level stability. We further transfer these attributed functions into ResNet18-based artificial neural networks and evaluate them on CIFAR-10 under inference-budget reduction and Gaussian noise corruption. The ns_TIN-inspired module improves performance preservation under reduced computation, whereas the superficial_TIN-inspired module improves robustness under input noise. These findings provide a subcircuit-level route for linking biological circuit organization with bio-inspired neural architecture design.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript reconstructs a directed retinotectal microcircuit graph from zebrafish tectal data, instantiates it as a leaky integrate-and-fire spiking neural network (LIF SNN) testbed, performs selective ablations of subcircuits including ns_TIN and superficial_TIN, and defines Energy Sensitivity Index and Robustness Sensitivity Index to attribute energy-efficient gating to ns_TIN and feedback-like stabilization to superficial_TIN. These attributions are then transferred into modified ResNet18 modules, which are evaluated on CIFAR-10 under reduced inference budgets and Gaussian noise, showing improved performance preservation and robustness respectively.

Significance. If the attributions are validated, the dual-axis framework offers a subcircuit-level route for deriving targeted bio-inspired motifs for energy-efficient and robust ANNs, with potential to guide neuromorphic hardware design. The use of ablation-based sensitivity indices to dissociate roles is a methodological strength that could yield falsifiable predictions, though the significance depends on confirming that the SNN results reflect biological function rather than testbed artifacts.

major comments (3)
  1. [§3.2] §3.2 (Graph Reconstruction and SNN Instantiation): The directed retinotectal graph is mapped to the LIF SNN without an explicit neuron-to-subcircuit assignment or a defined prediction task for the SNN, which is load-bearing because the subsequent attribution of distinct roles to ns_TIN and superficial_TIN rests on this step; without it, observed differences in sensitivity indices may reflect wiring statistics rather than biological computation.
  2. [§4.2] §4.2 (Ablation Experiments and Sensitivity Indices): The Energy Sensitivity Index and Robustness Sensitivity Index are derived directly from the ablation outcomes without controls comparing targeted subcircuit removals to random removals of equivalent size or connectivity density, and without reported error bars or statistical tests; this undermines the claim of functional dissociation (low spike footprint with prediction-error influence for ns_TIN versus highest robustness sensitivity for superficial_TIN).
  3. [§5.1] §5.1 (ANN Transfer Experiments): The ns_TIN- and superficial_TIN-inspired modules are inserted into ResNet18 and evaluated on CIFAR-10 without baseline comparisons to random module insertions, standard residual blocks, or other bio-inspired mechanisms of matched parameter count; this is required to establish that the reported gains under compute reduction and noise are attributable to the transferred properties.
minor comments (2)
  1. [Abstract] The abstract states results without accompanying error bars, statistical tests, or baseline comparisons, which reduces clarity even in a summary; these should be added or referenced to the relevant tables/figures.
  2. [§3] Notation for subcircuits (ns_TIN, superficial_TIN) and the exact definitions of the two sensitivity indices should be introduced with equations in the methods section rather than first appearing in the results.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments that help clarify the methodological foundations of our work. We address each major point below, providing clarifications where possible and committing to revisions that strengthen the claims without altering the core findings.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (Graph Reconstruction and SNN Instantiation): The directed retinotectal graph is mapped to the LIF SNN without an explicit neuron-to-subcircuit assignment or a defined prediction task for the SNN, which is load-bearing because the subsequent attribution of distinct roles to ns_TIN and superficial_TIN rests on this step; without it, observed differences in sensitivity indices may reflect wiring statistics rather than biological computation.

    Authors: The subcircuit assignments follow directly from the anatomical labels in the source zebrafish dataset, where individual neurons are classified into ns_TIN or superficial_TIN based on their laminar position and connectivity profiles. In the SNN instantiation, each biological neuron is mapped to an LIF unit that inherits its subcircuit membership, preserving the directed edges and weights from the reconstructed graph. The testbed task is defined as faithful retinotectal signal propagation: input spikes from the retina are driven through the circuit, and prediction error is quantified as the L2 deviation in output-layer spike rates relative to the intact network. This perturbation-based approach isolates functional contributions because the ablations target biologically predefined groups rather than arbitrary partitions; wiring statistics alone would not produce the observed dissociation in sensitivity indices. We will add an explicit supplementary table listing neuron counts, layer assignments, and connectivity statistics per subcircuit to make the mapping fully transparent. revision: yes
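The prediction-error definition in this response — L2 deviation of output-layer spike rates relative to the intact network — amounts to the following; any normalization is an assumption here.

```python
import numpy as np

def prediction_error(rates_ablated, rates_intact):
    """L2 deviation of output-layer spike rates from the intact network,
    as described in the rebuttal (no normalization assumed)."""
    return float(np.linalg.norm(rates_ablated - rates_intact))

# Toy spike rates (Hz) for a 4-unit output layer, intact vs. ablated.
intact = np.array([5.0, 3.0, 0.0, 2.0])
ablated = np.array([5.0, 0.0, 0.0, 2.0])
err = prediction_error(ablated, intact)  # → 3.0
```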

  2. Referee: [§4.2] §4.2 (Ablation Experiments and Sensitivity Indices): The Energy Sensitivity Index and Robustness Sensitivity Index are derived directly from the ablation outcomes without controls comparing targeted subcircuit removals to random removals of equivalent size or connectivity density, and without reported error bars or statistical tests; this undermines the claim of functional dissociation (low spike footprint with prediction-error influence for ns_TIN versus highest robustness sensitivity for superficial_TIN).

    Authors: We agree that random-ablation controls and statistical reporting would further substantiate the dissociation. In the revised manuscript we will include (i) matched-size random removals (both uniform random and degree-preserving) repeated over 20 independent trials, (ii) error bars showing standard deviation across trials for both sensitivity indices, and (iii) paired t-tests comparing targeted versus random ablations. These additions will demonstrate that the low energy sensitivity of ns_TIN and high robustness sensitivity of superficial_TIN exceed what is expected from generic connectivity changes. revision: yes
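The promised comparison — targeted ablation versus size-matched random ablation over 20 trials with a paired t-test — can be sketched with a hand-rolled paired t-statistic; the trial values below are illustrative, not the paper's.

```python
import math
import random

def paired_t(x, y):
    """Paired t-statistic for matched samples x, y
    (e.g. targeted vs. size-matched random ablation indices per trial)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Toy trials: targeted ablation consistently yields a higher sensitivity
# index than a size-matched random ablation.
rng = random.Random(0)
targeted = [0.30 + rng.gauss(0, 0.02) for _ in range(20)]
matched_random = [0.10 + rng.gauss(0, 0.02) for _ in range(20)]
t = paired_t(targeted, matched_random)
```

In practice `scipy.stats.ttest_rel` would give the same statistic together with a p-value against the t-distribution with n−1 degrees of freedom.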

  3. Referee: [§5.1] §5.1 (ANN Transfer Experiments): The ns_TIN- and superficial_TIN-inspired modules are inserted into ResNet18 and evaluated on CIFAR-10 without baseline comparisons to random module insertions, standard residual blocks, or other bio-inspired mechanisms of matched parameter count; this is required to establish that the reported gains under compute reduction and noise are attributable to the transferred properties.

    Authors: We will incorporate the requested baselines in the revised experiments. Specifically, we will report performance for (i) random module insertions of identical parameter count and placement, (ii) unmodified residual blocks, and (iii) an attention-based bio-inspired control of matched size. All variants will be evaluated under the same reduced-inference-budget and Gaussian-noise protocols. This will isolate the contribution of the ns_TIN-inspired gating motif to compute efficiency and the superficial_TIN-inspired stabilization motif to noise robustness. revision: yes

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper reconstructs a directed retinotectal graph from biological data, instantiates it as an LIF SNN testbed, performs selective subcircuit ablations, and computes the Energy Sensitivity Index and Robustness Sensitivity Index directly from measured changes in spike counts and prediction error. These indices are empirical observables of the simulation outcomes rather than quantities defined in terms of the claimed functional roles, so the attributions (ns_TIN as spike-efficient gate; superficial_TIN as robustness stabilizer) and the subsequent empirical transfer to ResNet18 modules on CIFAR-10 do not reduce by construction to the inputs. No self-citations, uniqueness theorems, ansatzes, or fitted parameters are invoked in a load-bearing way that would make the central dissociation tautological.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper relies on standard neural modeling assumptions and biological connectivity data rather than introducing new free parameters or invented entities.

axioms (2)
  • domain assumption The reconstructed directed retinotectal graph faithfully represents biological connectivity.
    All subsequent simulations and attributions rest on this reconstruction.
  • domain assumption Ablation outcomes in the LIF SNN testbed reveal transferable computational roles present in the biological circuit.
    This assumption bridges the simulation results to both biological interpretation and ANN transfer.

pith-pipeline@v0.9.0 · 5567 in / 1417 out tokens · 31808 ms · 2026-05-15T02:47:15.257045+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
