pith. machine review for the scientific record.

arxiv: 2604.26654 · v1 · submitted 2026-04-29 · 💻 cs.NE

Recognition: unknown

Evolutionary feature selection for spiking neural network pattern classifiers

Marco Castellani, Michal Valko, Nuno C. Marques

Pith reviewed 2026-05-07 12:39 UTC · model grok-4.3

classification 💻 cs.NE
keywords evolutionary algorithms · feature selection · spiking neural networks · JASTAP · pattern classification · IRIS dataset · neural network training

The pith

Extending evolutionary feature selection to JASTAP spiking networks permits smaller models that classify noisy data accurately.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper extends an evolutionary procedure for feature selection and training, previously used on multi-layer perceptrons, to JASTAP, a biologically realistic spiking neural network model, for classification tasks. Tests on the IRIS dataset provide evidence that this allows smaller networks to handle noisier data while keeping the same classification accuracy. Readers might care if they seek practical ways to apply spiking networks in noisy real-world settings where smaller size reduces computational demands. The work demonstrates that biologically inspired models can adopt optimization techniques from conventional neural networks.

Core claim

The paper establishes that applying the evolutionary feature selection and training procedure to the JASTAP model results in smaller neural networks capable of classifying the IRIS data set with no loss in accuracy even when the data is noisier.

What carries the argument

The JASTAP neural network model together with an evolutionary algorithm for simultaneous feature selection and network parameter optimization.
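The abstract gives no algorithmic detail, so the following is only a minimal sketch of what "simultaneous feature selection and parameter optimization" typically looks like in an evolutionary algorithm: each genome pairs a binary feature mask with a real-valued parameter vector, and selection rewards accuracy while penalizing the number of active features. All names, rates, and the fitness form below are assumptions, not the paper's method.

```python
import random

# Hedged sketch: the paper's operators, fitness, and encoding are not
# described in the abstract; everything below is illustrative only.

N_FEATURES = 4            # IRIS has 4 input features
N_PARAMS = 20             # placeholder count of network parameters
POP_SIZE, GENERATIONS, MUT_RATE = 50, 100, 0.1

def random_genome():
    """Genome = (binary feature mask, real-valued network parameters)."""
    mask = [random.random() < 0.5 for _ in range(N_FEATURES)]
    params = [random.gauss(0.0, 1.0) for _ in range(N_PARAMS)]
    return mask, params

def mutate(genome):
    """Bit-flip the mask and Gaussian-perturb the parameters."""
    mask, params = genome
    new_mask = [(not b) if random.random() < MUT_RATE else b for b in mask]
    new_params = [p + random.gauss(0.0, 0.1) if random.random() < MUT_RATE
                  else p for p in params]
    return new_mask, new_params

def evolve(evaluate):
    """`evaluate(mask, params)` -> classification accuracy in [0, 1].
    Fitness trades accuracy against the number of selected features,
    which is the pressure that would drive networks to shrink."""
    def fitness(g):
        mask, params = g
        return evaluate(mask, params) - 0.01 * sum(mask)

    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP_SIZE // 2]          # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)
```

The size penalty is the one ingredient here that could produce the headline effect (smaller networks at equal accuracy); whether the authors use an explicit penalty or an implicit one is not stated in the abstract.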

If this is right

  • Smaller networks achieve equivalent accuracy on the IRIS classification task.
  • The networks tolerate higher noise levels in input data.
  • The method integrates feature selection directly into the evolutionary training process.
  • JASTAP serves as a practical alternative to multi-layer perceptrons for pattern classification.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This approach could extend to other spiking neural network variants for improved scalability.
  • Validation on diverse datasets would test if the noise tolerance generalizes beyond the IRIS benchmark.
  • Potential for reduced training time or energy use in embedded classification systems follows if smaller networks prove reliable.

Load-bearing premise

The evolutionary procedure from multi-layer perceptrons transfers directly to the JASTAP spiking model without modification and delivers performance benefits on noisy data.

What would settle it

Observing degraded accuracy when using the smaller JASTAP networks on the IRIS dataset with added noise or on other standard classification benchmarks would disprove the central claim.

Figures

Figures reproduced from arXiv: 2604.26654 by Marco Castellani, Michal Valko, Nuno C. Marques.

Figure 1
Figure 1. Postsynaptic potential (PSP) with different waveforms.
Figure 2
Figure 2. Limiting non-linear function, (2/π)·atan(x).
Figure 3
Figure 3. Example of encoding data to temporal code: first iris-setosa training example ([5.1, 3.5, 1.4, 0.2] scaled to [7.2, 11.3, 5.7, 5.4]). Each row represents the input to one of four input neurons. Data are scaled to 5–15 ms and repeated over a time period of 300 ms; every interspike interval is jittered with ±1 ms of Gamma noise (a sketch of this encoding follows the figure list). Adjacent table: Iris dataset (UCI ML), 150 examples, 4 features, 3 classes, 80% random training set.
Figure 4
Figure 4. Iris network (FeaSTAP2).
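Figure 3's caption is specific enough to reconstruct the temporal encoding as a sketch. The linear feature-to-interval mapping and the exact Gamma parameters below are assumptions; the caption gives only the 5–15 ms range, the 300 ms period, and the ±1 ms jitter.

```python
import random

def scale_to_interval(value, f_min, f_max, lo=5.0, hi=15.0):
    """Map a feature value to an interspike interval in [lo, hi] ms.
    Assumed: linear per-feature scaling over the training-set range;
    the caption states the scaled values but not the exact mapping."""
    return lo + (hi - lo) * (value - f_min) / (f_max - f_min)

def spike_train(interval_ms, period_ms=300.0):
    """Repeat the interval over the period, jittering each interspike
    interval by roughly +/- 1 ms of shifted Gamma noise, per Figure 3.
    gammavariate(2.0, 0.5) has mean 1 ms; subtracting 1 centers it."""
    spikes, t = [], 0.0
    while t < period_ms:
        jitter = random.gammavariate(2.0, 0.5) - 1.0
        t += max(interval_ms + jitter, 0.1)   # keep intervals positive
        spikes.append(t)
    return spikes

# First iris-setosa example: one train per input neuron, using the
# scaled intervals [7.2, 11.3, 5.7, 5.4] stated in the caption.
trains = [spike_train(iv) for iv in (7.2, 11.3, 5.7, 5.4)]
```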
read the original abstract

This paper presents an application of the biologically realistic JASTAP neural network model to classification tasks. The JASTAP neural network model is presented as an alternative to the basic multi-layer perceptron model. An evolutionary procedure previously applied to the simultaneous solution of feature selection and neural network training on standard multi-layer perceptrons is extended with JASTAP model. Preliminary results on IRIS standard data set give evidence that this extension allows the use of smaller neural networks that can handle noisier data without any degradation in classification accuracy.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript extends a previously developed evolutionary procedure for simultaneous feature selection and training of multi-layer perceptrons to the JASTAP biologically realistic spiking neural network model. It applies this to classification tasks and reports preliminary results on the standard IRIS dataset as evidence that the extension permits smaller networks that maintain classification accuracy while handling noisier data.

Significance. If the central claims can be substantiated through detailed methods, baselines, and explicit noise experiments, the work would offer a modest but useful contribution to evolutionary optimization of spiking networks for pattern classification. It could support development of more compact, robust classifiers suitable for neuromorphic hardware. The preliminary nature and lack of verification details currently limit broader impact.

major comments (2)
  1. Abstract: The claim that the JASTAP extension 'allows the use of smaller neural networks that can handle noisier data without any degradation in classification accuracy' is load-bearing but unsupported. Only results on the clean standard IRIS dataset are referenced; no noise model, corruption procedure, or comparative accuracy results under added noise are described, creating an evidentiary gap between the reported experiments and the noise-robustness conclusion.
  2. Abstract and methods description: No details are supplied on the evolutionary algorithm parameters, JASTAP network architectures used, performance metrics with error bars, baseline comparisons against standard MLPs or other classifiers, or data exclusion rules. These omissions prevent independent verification of the reported smaller networks and accuracy maintenance, which are central to the paper's contribution.
minor comments (1)
  1. The abstract would benefit from quantifying the network size reduction achieved and stating the exact classification accuracy values obtained.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We agree that the abstract claims require stronger substantiation and that methodological details must be expanded for reproducibility. We will perform a major revision to address both points.

read point-by-point responses
  1. Referee: Abstract: The claim that the JASTAP extension 'allows the use of smaller neural networks that can handle noisier data without any degradation in classification accuracy' is load-bearing but unsupported. Only results on the clean standard IRIS dataset are referenced; no noise model, corruption procedure, or comparative accuracy results under added noise are described, creating an evidentiary gap between the reported experiments and the noise-robustness conclusion.

    Authors: We acknowledge the evidentiary gap. The current experiments use only the standard clean IRIS dataset, and the abstract claim about noisier data is an extrapolation from the biologically realistic properties of JASTAP rather than direct evidence. In the revision we will add explicit noise-robustness experiments (e.g., additive Gaussian noise at varying levels to input features, with a defined corruption procedure) and report comparative classification accuracies (with error bars) for the evolved smaller networks versus baseline networks; a sketch of one such corruption harness follows this response list. The abstract will be revised to reflect these new results. revision: yes

  2. Referee: Abstract and methods description: No details are supplied on the evolutionary algorithm parameters, JASTAP network architectures used, performance metrics with error bars, baseline comparisons against standard MLPs or other classifiers, or data exclusion rules. These omissions prevent independent verification of the reported smaller networks and accuracy maintenance, which are central to the paper's contribution.

    Authors: We agree that these omissions limit verifiability. The revised manuscript will expand the methods section to specify: evolutionary algorithm parameters (population size, generations, mutation/crossover rates, fitness function); JASTAP network architectures (neuron counts per layer, spike-timing parameters, connectivity); performance metrics reported as means with standard deviations or error bars across repeated runs; baseline comparisons against standard MLPs (and optionally SVM or other classifiers) using identical feature-selection and training protocols; and any IRIS data preprocessing or exclusion rules. The abstract will be updated for precision and to reference the added experiments. revision: yes
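Both responses above promise concrete procedures. A minimal sketch of the corruption-and-evaluation harness promised in response 1 follows; the noise levels, trial counts, and the classifier interface are all assumptions.

```python
import numpy as np

def accuracy_under_noise(classifier, X, y,
                         sigmas=(0.0, 0.1, 0.2, 0.5),
                         n_trials=20, seed=0):
    """Mean accuracy +/- std over repeated trials of additive Gaussian
    input noise, as response 1 proposes. `classifier` is any object
    with a .predict(X) method; that interface is an assumption."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        accs = [float(np.mean(classifier.predict(
                    X + rng.normal(0.0, sigma, size=X.shape)) == y))
                for _ in range(n_trials)]
        results[sigma] = (float(np.mean(accs)), float(np.std(accs)))
    return results
```

For response 2, one way to make the promised parameter reporting auditable is a single frozen configuration record; every value below is a placeholder, not a number from the paper.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class EAConfig:
    # Placeholders only: the paper reports none of these settings.
    population_size: int = 50
    generations: int = 100
    mutation_rate: float = 0.1
    crossover_rate: float = 0.7
    fitness: str = "accuracy - feature-count penalty"
    n_runs: int = 30    # repeated runs behind the promised error bars

print(asdict(EAConfig()))
```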

Circularity Check

0 steps flagged

No circularity detected in derivation or claims

full rationale

The paper extends a previously developed evolutionary procedure for feature selection and training from multi-layer perceptrons to the JASTAP spiking model, then reports preliminary empirical results on the standard IRIS dataset. No derivation chain, equations, or first-principles predictions are presented that reduce by construction to the inputs themselves. The central claim rests on experimental outcomes rather than self-definition, fitted parameters renamed as predictions, or load-bearing self-citations whose validity depends on the current work. Self-citation of prior procedure is normal and does not bear the load here, as the new results are independent observations.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only the abstract is available, so no free parameters, axioms, or invented entities can be extracted or audited from the text.

pith-pipeline@v0.9.0 · 5374 in / 1024 out tokens · 43635 ms · 2026-05-07T12:39:05.230089+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

21 extracted references

  1. [1] Mitchell, T.M.: Machine Learning. McGraw-Hill Higher Education (1997)
  2. [2] Garcez, A., Gabbay, D., Hölldobler, S., Taylor, J.: Editorial. Journal of Applied Logic 2 (2004) 241–243
  3. [3] Hitzler, P., Hölldobler, S., Seda, A.K.: Logic programs and connectionist networks. Journal of Applied Logic 2 (2004) 245–272
  4. [4] Arbib, M., ed.: The Handbook of Brain Theory and Neural Networks. Second edn. MIT Press (2003)
  5. [5] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations (1986) 318–362
  6. [6] Janco, J., Stavrovsky, I., Pavlasek, J.: Modeling of neuronal functions: A neuronlike element with the graded response. Computers and Artificial Intelligence 13 (1994) 603–620
  7. [7] Maass, W., Bishop, C.M., eds.: Pulsed Neural Networks. Volume 1. MIT Press, Cambridge, MA, USA (1999)
  8. [8] Castellani, M., Marques, N.C.: A technical report on the evolutionary feature selection for artificial neural network pattern classifiers. CENTRIA Internal technical report (2005)
  9. [9] Bohte, S.M., Kok, J.N., Poutré, J.A.L.: Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48 (2002) 17–37
  10. [10] Redman, S., Walmsley, B.: The time course of synaptic potentials evoked in cat spinal motoneurones at identified group Ia synapses. J Physiol (Lond) 343 (1983) 117–133
  11. [11] Pavlasek, J., Jenca, J.: Temporal coding and recognition of uncued temporal patterns in neuronal spike trains: biologically plausible network of coincidence detectors and coordinated time delays. Biologia, Bratislava 56 (2001) 591–604
  12. [12] Pavlasek, J., Jenca, J., Harman, R.: Rate coding: neurobiological network performing detection of the difference between mean spiking rates. Acta Neurobiol Exp (Wars) 63 (2003) 83–98
  13. [13] Valko, M.: Evolving neural networks for statistical decision theory. Master's thesis, Comenius University, Bratislava, Slovakia (2005)
  14. [14] Koch, C.: Biophysics of Computation: Information Processing in Single Neurons (Computational Neuroscience). Oxford University Press (1998)
  15. [15] Gerstner, W.: Time structure of the activity in neural network models. Phys. Rev. E 51 (1995) 738–758
  16. [16] Singer, W.: Time as coding space? Curr. Opin. Neurobiol. 9 (1999) 189–194
  17. [17] Abbott, L., Sejnowski, T.J.: Neural codes and distributed representations: foundations of neural computation. MIT Press, Cambridge, MA, USA (1999)
  18. [18] Hettich, S., Blake, C., Merz, C.: UCI repository of machine learning databases (1998)
  19. [19] Fishman, G.S.: Monte Carlo: concepts, algorithms and applications. Springer-Verlag, New York (1996)
  20. [20] Marques, N.C., Lopes, G.P.: Tagging with small training corpora. Springer, Lecture Notes in Computer Science 2189 (2001) 63–72
  21. [21] Castellani, M., Marques, N.C.: Automatic Detection of Meddies through Texture Analysis of Sea Surface Temperature Maps. EPIA'05, 12th Portuguese Conference on Artificial Intelligence, Amilcar Cardoso, Gael Dias, Carlos Bento (eds.), Springer, Guarda, Portugal (2005)