Pith · machine review for the scientific record

arXiv:2604.21602 · v1 · submitted 2026-04-23 · 💻 cs.NE · cs.AI · cs.AR · cs.ET · cs.LG

Recognition: unknown

On the Role of Preprocessing and Memristor Dynamics in Reservoir Computing for Image Classification

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 12:59 UTC · model grok-4.3

classification 💻 cs.NE · cs.AI · cs.AR · cs.ET · cs.LG
keywords reservoir computing · volatile memristors · image classification · MNIST · preprocessing · device variability · neuromorphic computing · parallel delayed feedback

The pith

Volatile memristors in a delayed-feedback reservoir reach 95.89 percent MNIST accuracy and stay robust at 20 percent device variability.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates how preprocessing choices and the natural time-dependent behavior of volatile memristors shape the performance of a parallel delayed feedback network reservoir computing system on image classification. It demonstrates that decay rates, quantization effects, and variability can be turned into useful computational resources rather than obstacles when the input data are suitably prepared. A sympathetic reader would care because reservoir computing avoids the heavy training costs of conventional neural networks and memristors promise compact, low-power hardware, so any method that makes them reliable for real tasks moves neuromorphic systems closer to practical use. The work focuses on the MNIST digit set and shows that the approach matches top reported memristor-based results while tolerating substantial device imperfections.

Core claim

In the parallel delayed feedback network architecture, volatile memristors supply the recurrent dynamics while preprocessing steps improve how spatial image data are mapped into the reservoir; when decay rate, quantization, and variability are modeled explicitly, the system achieves 95.89 percent classification accuracy on MNIST and retains up to 94.2 percent accuracy under 20 percent device variability, showing that such memristors can perform reliable spatio-temporal processing for neuromorphic hardware.
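The interplay of decay rate, quantization, and variability named in the claim can be made concrete with a toy state-update rule. Everything below is an illustrative stand-in, not the paper's fitted device model: the constants (`tau`, `gain`), the exponential decay form, and the 5-bit rounding are assumptions chosen only to show how the three effects enter a single update.

```python
import numpy as np

def memristor_step(x, v, tau=0.5, gain=0.3, levels=2**5, spread=0.0, rng=None):
    """One time step of a toy volatile-memristor state variable.

    The state x decays toward 0 with time constant tau between pulses,
    a write voltage v pushes it upward, the result is quantized to a
    fixed number of levels (5-bit here), and `spread` injects relative
    device-to-device variability. Illustrative model, not the paper's.
    """
    rng = rng or np.random.default_rng()
    g = gain * (1.0 + spread * rng.standard_normal()) if spread else gain
    x = x * np.exp(-1.0 / tau) + g * v                # decay + voltage-driven write
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)  # n-bit quantization

# Drive one device with a binary pulse train (cf. Figure 3a).
x = 0.0
for v in [1, 1, 0, 1, 0, 0, 1]:
    x = memristor_step(x, v, spread=0.2)              # 20% variability
```

Because the decayed state of earlier pulses survives into later steps, the final `x` encodes the recent input history — the fading memory that reservoir computing relies on.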

What carries the argument

The parallel delayed feedback network (PDFN) reservoir computing architecture driven by volatile memristors, where the devices' intrinsic decay and state evolution replace explicit recurrent weights and preprocessing converts static images into suitable temporal inputs.
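How intrinsic decay can stand in for explicit recurrent weights is easiest to see in a minimal sketch. The per-row parallel drive loosely follows the PDFN schematic (each row of the binarized image feeds its own device), but the device equation, `tau`, and `gain` are simplified placeholders, not the paper's circuit:

```python
import numpy as np

def pdfn_states(image_rows, tau=0.5, gain=0.3):
    """Collect reservoir states from a toy parallel delayed-feedback setup.

    Each row of a binarized image drives its own volatile device in
    parallel; the device's decaying state carries a fading memory of
    earlier pixels in the row, playing the role of recurrent weights.
    """
    states = []
    for row in image_rows:                 # one device per row, in parallel
        x, trace = 0.0, []
        for pixel in row:                  # pixels arrive as a pulse train
            x = x * np.exp(-1.0 / tau) + gain * pixel
            x = min(max(x, 0.0), 1.0)
            trace.append(x)
        states.extend(trace)               # concatenate states for the readout
    return np.array(states)

img = (np.random.default_rng(0).random((28, 28)) > 0.5).astype(float)
features = pdfn_states(img)               # 784 state samples for a linear readout
```

The only trained component downstream of this is a linear readout over `features`, which is where the low training cost of the approach comes from.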

If this is right

  • Volatile memristors can serve as the core building blocks for compact, high-speed neuromorphic systems that process spatio-temporal data without large training overhead.
  • Device variability need not be eliminated if the reservoir design exploits the dynamics rather than fighting them.
  • Preprocessing methods that convert images into temporal streams become a practical lever for boosting reservoir performance in hardware implementations.
  • The approach supports energy-efficient image recognition tasks where conventional deep networks would require more resources.
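One plausible reading of the "2D + parity" preprocessing referenced in the figures can be sketched as follows. The assumptions here — row-and-column scans for the "2D" part, fixed-length sections with an appended parity bit, a 0.5 binarization threshold — are an editorial reconstruction; the paper's exact scheme may differ.

```python
import numpy as np

def preprocess(image, threshold=0.5, sections=7):
    """Toy '2D + parity'-style preprocessing (illustrative reading).

    Binarize the image, scan it both row-wise and column-wise ('2D'),
    split each scan line into a fixed number of sections, and append
    each section's parity bit as an extra nonlinear feature.
    """
    b = (image > threshold).astype(int)
    streams = []
    for line in np.concatenate([b, b.T]):          # rows, then columns
        for chunk in np.array_split(line, sections):
            parity = int(chunk.sum()) % 2          # parity bit of the section
            streams.append(np.append(chunk, parity))
    return np.concatenate(streams)

img = np.random.default_rng(1).random((28, 28))
stream = preprocess(img)    # temporal pulse stream to feed the reservoir
```

The point of the exercise is that a static 28×28 image becomes a longer binary pulse sequence, which is the form of input a decaying memristor state can actually exploit.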

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same combination of preprocessing and device-aware design could be tested on other static image datasets or simple time-series tasks to check generality beyond MNIST.
  • Hardware prototypes would reveal whether simulation-to-reality gaps in memristor decay or variability require additional calibration steps.
  • If the PDFN structure scales, it might reduce the total number of devices needed compared with fully digital reservoir implementations.

Load-bearing premise

The modeled memristor behaviors for decay, quantization, and variability accurately match those of physical devices and the reported accuracy is not the result of extensive tuning on the test set.

What would settle it

Fabricate the PDFN circuit with actual volatile memristors, apply the same preprocessing, and measure MNIST classification accuracy; if the result falls well below 94 percent or loses robustness at 20 percent variability, the central claim does not hold.
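The shape of that experiment can be rehearsed in simulation. The harness below is a deliberately small stand-in — synthetic 8×8 dark-vs-bright patterns and a least-squares readout, not MNIST or the paper's circuit — showing only the structure: train the readout on nominal devices, then re-evaluate with 20% device-to-device gain spread.

```python
import numpy as np

rng = np.random.default_rng(42)

def reservoir(seq, gain, tau=0.5):
    """Final state of one toy volatile device after a pulse train."""
    x = 0.0
    for v in seq:
        x = min(max(x * np.exp(-1.0 / tau) + gain * v, 0.0), 1.0)
    return x

def features(img, gains):
    # One device per row; per-device gain models fabrication spread.
    return np.array([reservoir(row, g) for row, g in zip(img, gains)])

def sample(cls, n):
    # Two synthetic 'classes': mostly-dark vs mostly-bright 8x8 patterns.
    p = 0.2 if cls == 0 else 0.8
    return (rng.random((n, 8, 8)) < p).astype(float)

nominal = np.ones(8)                       # ideal, identical devices
F = np.array([features(im, nominal)
              for im in np.concatenate([sample(0, 50), sample(1, 50)])])
y = np.array([0] * 50 + [1] * 50)
w, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], 2 * y - 1, rcond=None)

# Re-evaluate the same readout under 20% device-to-device variability.
varied = 1.0 + 0.2 * rng.standard_normal(8)
Ft = np.array([features(im, varied)
               for im in np.concatenate([sample(0, 50), sample(1, 50)])])
acc = np.mean((np.c_[Ft, np.ones(len(Ft))] @ w > 0) == y)
```

On a hardware prototype, `features` would be replaced by measured device states; the claim survives only if `acc` stays near its nominal value under the measured, rather than sampled, variability.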

Figures

Figures reproduced from arXiv:2604.21602 by David Saad, Duna Wattad, Rishona Daniels, Ronny Ronen, Shahar Kvatinsky. Captions truncated at source.

Figure 1. Block diagram of RC with input, reservoir, and readout layers. Only readout weights (…)
Figure 2. Network structures of (a) echo state network (ESN), (b) liquid state machine (LSM), (c) delayed feedback net…
Figure 3. Evolution of internal state variable x of the dynamic memristor (based on (5)-(7)) under different input pulse trains: (a) binary inputs where the applied voltage alternates between 1.5 V (logic '1') and 0 V (logic '0'); and (b) analog inputs where the applied voltage varies continuously between 0.8 V and 1.8 V (input sequence has values between 0 and 0.5 and are scaled to the write voltage range of the memris…)
Figure 4. Schematic of parallel delayed feedback network (PDFN) reservoir computing (RC). Each row of the binarized…
Figure 5. Schematics of preprocessing methods for image processing. (a) One-dimensional preprocessing: each horizontal…
Figure 6. Test accuracy across various preprocessing methods. Data shown for 5-bit quantized memristors with…
Figure 7. Test accuracy versus quantization levels for different section lengths. Data shown is for 2D + parity with…
Figure 8. Test accuracy versus time decay rate, τ, for different section lengths for 2D + parity with 5-bit quantization…
Figure 9. Test accuracy versus time decay rate, τ, for different numbers of memristor quantization levels for 2D + parity with 7 sections.
Figure 10. Final state x of a single volatile memristor versus decay rate τ for all possible sequences with four write pulses…
Figure 11. Test accuracy versus quantization levels for different variability conditions for 2D + parity with 7 sections and…
Original abstract

Reservoir computing (RC) is an emerging recurrent neural network architecture that has attracted growing attention for its low training cost and modest hardware requirements. Memristor-based circuits are particularly promising for RC, as their intrinsic dynamics can reduce network size and parameter overhead in tasks such as time-series prediction and image recognition. Although RC has been demonstrated with several memristive devices, a comprehensive evaluation of device-level requirements remains limited. In this paper, we analyze and explain the operation of a parallel delayed feedback network (PDFN) RC architecture with volatile memristors, focusing on how device characteristics -- such as decay rate, quantization, and variability -- affect reservoir performance. We further discuss strategies to improve data representation in the reservoir using preprocessing methods and suggest potential improvements. The proposed approach achieves 95.89% classification accuracy on MNIST, comparable with the best reported memristor-based RC implementations. Furthermore, the method maintains high robustness under 20% device variability, achieving an accuracy of up to 94.2%. These results demonstrate that volatile memristors can support reliable spatio-temporal information processing and reinforce their potential as key building blocks for compact, high-speed, and energy-efficient neuromorphic computing systems.
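The "low training cost" the abstract refers to comes from training only the readout layer while the reservoir stays fixed. A generic closed-form ridge-regression readout — a common choice in RC, though not necessarily the paper's exact solver — looks like this:

```python
import numpy as np

def train_readout(states, labels, n_classes=10, ridge=1e-3):
    """Closed-form ridge regression for an RC readout.

    Only these output weights are learned; the reservoir that produced
    `states` is never touched. Generic sketch, not the paper's solver.
    """
    S = np.c_[states, np.ones(len(states))]          # add a bias column
    T = np.eye(n_classes)[labels]                    # one-hot targets
    # Normal equations with L2 regularization: (S'S + rI) W = S'T
    W = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ T)
    return W

def predict(states, W):
    return np.argmax(np.c_[states, np.ones(len(states))] @ W, axis=1)

# Smoke test on random stand-in states with a linearly separable label.
rng = np.random.default_rng(0)
X = rng.random((200, 40))
y = (X[:, 0] > 0.5).astype(int)                      # only classes 0 and 1 occur
W = train_readout(X, y)
train_acc = np.mean(predict(X, W) == y)
```

Because this is a single linear solve rather than iterative backpropagation, the training cost is negligible next to that of a comparably accurate deep network.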

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 3 minor

Summary. The manuscript analyzes a parallel delayed feedback network (PDFN) reservoir computing architecture implemented with volatile memristors for image classification tasks such as MNIST. It examines how memristor device properties—including decay rate, quantization, and variability—influence reservoir dynamics and performance, proposes preprocessing techniques to enhance input data representation in the reservoir, and reports simulation results achieving 95.89% classification accuracy on MNIST with robustness to 20% device variability yielding up to 94.2% accuracy. The work concludes that these findings support the use of volatile memristors for reliable spatio-temporal processing in compact neuromorphic systems.

Significance. If the simulation results hold under realistic hardware conditions, the paper would provide useful guidance on device requirements and preprocessing for memristor-based RC, reinforcing the potential of volatile memristors to enable low-overhead, energy-efficient neuromorphic hardware for image recognition. The emphasis on explaining the role of specific device characteristics (decay, quantization, variability) could help bridge device physics and system-level design. However, the absence of any hardware measurements or model validation against fabricated devices substantially reduces the immediate significance, as the central claims rest on unverified simulation fidelity.

major comments (1)
  1. [Results and Discussion (around the accuracy and robustness subsections)] The headline performance claims (95.89% MNIST accuracy and 94.2% under 20% variability) are obtained exclusively from simulations of a modeled PDFN reservoir whose state evolution depends on parameterized decay rate, quantization, and variability. No section provides comparison of these modeled I-V curves, retention statistics, or variability distributions to measured data from physical volatile memristors, which directly undermines the assertion that the results demonstrate reliable hardware operation.
minor comments (3)
  1. [Abstract and Introduction] The abstract and introduction would benefit from explicit statements that all results are simulation-based rather than measured on hardware, to avoid potential misinterpretation of the robustness claims.
  2. [Methods / Architecture Description] Notation for the PDFN state update equation and the preprocessing pipeline could be clarified with a single consolidated diagram or pseudocode block, as the current description leaves the exact mapping from pixel data to reservoir inputs ambiguous.
  3. [Figures in Results section] Several figures showing reservoir state trajectories or accuracy vs. variability curves lack error bars or multiple-run statistics, making it difficult to judge the statistical significance of the reported robustness.

Simulated Author's Rebuttal

1 response · 1 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comment below and have revised the text to better reflect the simulation-based nature of the study while preserving the value of the parameter analysis.

Point-by-point responses
  1. Referee: [Results and Discussion (around the accuracy and robustness subsections)] The headline performance claims (95.89% MNIST accuracy and 94.2% under 20% variability) are obtained exclusively from simulations of a modeled PDFN reservoir whose state evolution depends on parameterized decay rate, quantization, and variability. No section provides comparison of these modeled I-V curves, retention statistics, or variability distributions to measured data from physical volatile memristors, which directly undermines the assertion that the results demonstrate reliable hardware operation.

    Authors: We agree that the reported accuracies (95.89% and up to 94.2% under variability) are obtained from simulations using parameterized models of volatile memristor dynamics, as described in the Methods and Results sections. The manuscript does not include experimental hardware measurements or direct comparisons of modeled I-V curves, retention, or variability to fabricated devices. We will revise the abstract, introduction, and Results/Discussion sections to explicitly state that the work is simulation-based, to moderate claims (e.g., changing 'demonstrate that volatile memristors can support reliable... operation' to 'suggest the potential for volatile memristors to support...'), and to add a limitations paragraph emphasizing the need for future experimental validation. The core contribution remains the analysis of how decay rate, quantization, and variability affect PDFN reservoir performance, which provides design guidance even without hardware data in this study. revision: partial

standing simulated objections not resolved
  • Direct comparison of the parameterized models to measured I-V curves, retention statistics, or variability distributions from physical volatile memristor devices, as this study is limited to simulations.

Circularity Check

0 steps flagged

No circularity: performance metrics arise from independent simulation of modeled dynamics on MNIST

Full rationale

The paper reports classification accuracy obtained by simulating a parallel delayed feedback network reservoir whose state evolution follows explicitly modeled volatile memristor equations (decay, quantization, variability). These accuracies are computed outputs of the forward simulation on fixed test data, not quantities defined by or fitted to the same outputs. No equations, uniqueness theorems, or self-citations are invoked that would make the reported 95.89% or 94.2% figures tautological with the input model parameters or preprocessing choices. The central claims therefore remain externally falsifiable against physical device measurements and do not reduce to their own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Based solely on the abstract, no explicit free parameters, axioms, or invented entities are introduced; the work relies on standard reservoir-computing models and memristor device characteristics drawn from prior literature.

pith-pipeline@v0.9.0 · 5538 in / 1127 out tokens · 22760 ms · 2026-05-08T12:59:41.778380+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

50 extracted references

  1. [1] H. Jaeger, "The 'Echo State' Approach to Analysing and Training Recurrent Neural Networks," GMD Report 148, German National Research Center for Information Technology, 2001
  2. [2] W. Maass, T. Natschläger, and H. Markram, "Real-time computing without stable states: A new framework for neural computation based on perturbations," Neural Computation, Vol. 14, No. 11, pp. 2531–2560, November 2002
  3. [3] E. Picco and S. Massar, "Real-Time Photonic Deep Reservoir Computing for Speech Recognition," 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 2023, pp. 1-7
  4. [4] Md R. E. U. Shougat, et al., "Hopf physical reservoir computer for reconfigurable sound recognition," Scientific Reports, Vol. 13, Art. no. 8719, May 2023
  5. [5] P. Buteneers, et al., "Real-time detection of epileptic seizures in animal models using reservoir computing," Epilepsy Research, Vol. 103, No. 2–3, pp. 124–134, February 2013
  6. [6] R. Terajima, K. Inoue, K. Nakajima, and Y. Kuniyoshi, "Multifunctional physical reservoir computing in soft tensegrity robots," Chaos: An Interdisciplinary Journal of Nonlinear Science, Vol. 35, No. 8, Art. no. 083111, August 2025
  7. [7] Z. Tian, H. Li, and F. Li, "A combination forecasting model of wind speed based on decomposition," Energy Reports, Vol. 7, pp. 1217–1233, November 2021
  8. [8] W.-J. Wang, Y. Tang, J. Xiong, and Y.-C. Zhang, "Stock market index prediction based on reservoir computing models," Expert Systems with Applications, Vol. 178, p. 115022, September 2021
  9. [9] Z. Tong and G. Tanaka, "Reservoir computing with untrained convolutional neural networks for image recognition," Proceedings of the International Conference on Pattern Recognition (ICPR), pp. 1289-1294, Beijing, China, August 2018
  10. [10] E. Picco, P. Antonik, and S. Massar, "High speed human action recognition using a photonic reservoir computer," Neural Networks, Vol. 165, pp. 662–675, August 2023
  11. [11] L. Ratas and K. Pyragas, "Application of next-generation reservoir computing for predicting chaotic systems from partial observations," Physical Review E, Vol. 109, p. 064215, June 2024
  12. [12] D. Verstraeten, B. Schrauwen, M. D'Haene, and D. Stroobandt, "An experimental unification of reservoir computing methods," Neural Networks, Vol. 20, No. 3, pp. 391–403, 2007
  13. [13] G. Van der Sande, D. Brunner, and M. C. Soriano, "Advances in photonic reservoir computing," Nanophotonics, Vol. 6, No. 3, pp. 561–576, May 2017
  14. [14] X. Liang, et al., "Physical reservoir computing with emerging electronics," Nature Electronics, Vol. 7, pp. 193–206, March 2024
  15. [15] P. Bhovad and S. Li, "Physical reservoir computing with origami and its application to robotic crawling," Nature Communications, 2021
  16. [16] T. Taniguchi, A. Ogihara, Y. Utsumi, and S. Tsunegi, "Spintronic reservoir computing without driving current or magnetic field," Scientific Reports, Vol. 12, Art. no. 10627, 2022
  17. [17] N. Lin et al., "In-memory and in-sensor reservoir computing with memristive devices," APL Machine Learning, Vol. 2, No. 1, Art. no. 010901, 2024
  18. [18] C. Du et al., "Reservoir computing using dynamic memristors for temporal information processing," Nature Communications, Vol. 8, No. 1, p. 2204, December 2017
  19. [19] M. Lukoševičius and H. Jaeger, "Reservoir computing approaches to recurrent neural network training," Computer Science Review, Vol. 3, No. 3, pp. 127–149, 2009
  20. [20] H. Jaeger and H. Haas, "Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication," Science, Vol. 304, No. 5667, pp. 78–80, 2004
  21. [21] G. Milano et al., "In materia reservoir computing with a fully memristive architecture based on self-organizing nanowire networks," Nature Materials, Vol. 21, No. 2, pp. 195–202, 2022
  22. [22] L. Appeltant, et al., "Information processing using a single dynamical node as complex system," Nature Communications, Vol. 2, No. 468, pp. 1–6, 2011
  23. [23] L. Chua, "Memristor - The missing circuit element," IEEE Transactions on Circuit Theory, Vol. 18, No. 5, pp. 507-519, September 1971
  24. [24] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, "The missing memristor found," Nature, Vol. 453, No. 7191, pp. 80–83, May 2008
  25. [25] L. O. Chua and S. M. Kang, "Memristive devices and systems," Proceedings of the IEEE, Vol. 64, No. 2, pp. 209–223, February 1976
  26. [26] L. O. Chua, "Resistance switching memories are memristors," Applied Physics A, Vol. 102, No. 4, pp. 765–783, March 2011
  27. [27] L. O. Chua, "If it's pinched, it's a memristor," Semiconductor Science and Technology, Vol. 29, No. 10, 104001, September 2014
  28. [28] J. Wang et al., "Optically Controlled MoS2 Phase Conversion Memory-Based In-Sensor Computing Enables Higher Information Security," ACS Photonics, Vol. 12, No. 12, pp. 6946–6956, 2025
  29. [29] J. Pan et al., "Flexible TiO2-WO3-x hybrid memristor with enhanced linearity and synaptic plasticity for precise weight tuning in neuromorphic computing," npj Flexible Electronics, Vol. 8, Art. no. 70, October 2024
  30. [30] S. W. Schmid et al., "Picosecond Femtojoule Resistive Switching in Nanoscale VO2 Memristors," ACS Nano, Vol. 18, No. 33, pp. 21966–21974, 2024
  31. [31] S. Kvatinsky, M. Ramadan, E. G. Friedman and A. Kolodny, "VTEAM: A general model for voltage-controlled memristors," IEEE Transactions on Circuits and Systems II: Express Briefs, Vol. 62, No. 8, pp. 786-790, August 2015
  32. [32] H. Li, D. Kumar, and N. El-Atab, "A neuromorphic event data interpretation approach with hardware reservoir," Frontiers in Neuroscience, Vol. 18, Art. no. 1467935, November 2024
  33. [33] T. Patni, R. Daniels, and S. Kvatinsky, "V-VTEAM: A compact behavioral model for volatile memristors," 2024 IEEE International Flexible Electronics Technology Conference (IFETC), Bologna, Italy, pp. 1-4, 2024
  34. [34] T. Chang, et al., "Synaptic behaviors and modeling of a metal oxide memristive device," Applied Physics A: Materials Science & Processing, Vol. 102, pp. 857–863, February 2011
  35. [35] T. Wang et al., "A faithful and compact diffusive memristor model," IEEE Transactions on Circuits and Systems for Artificial Intelligence, Vol. 1, No. 2, pp. 141-148, December 2024
  36. [36] A. M. Hassan, H. H. Li and Y. Chen, "Hardware implementation of echo state networks using memristor double crossbar arrays," 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, pp. 2171-2177, 2017
  37. [37] S. Wang et al., "Echo state graph neural networks with analogue random resistive memory arrays," Nature Machine Intelligence, Vol. 5, 2023
  38. [38] A. Henderson, C. Yakopcic, S. Harbour and T. M. Taha, "Memristor based circuit design for liquid state machine verified with temporal classification," 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 2022, pp. 1-9
  39. [39] N. Lin et al., "LSMR: Synergy randomness in liquid state machine and RRAM-based analog-digital accelerator," Proc. IEEE/ACM Int. Conf. Computer-Aided Design (ICCAD), pp. 232:1–232:9, 2024
  40. [40] N. Lin et al., "Resistive memory-based zero-shot liquid state machine for multimodal event data learning," Nature Computational Science, Vol. 5, pp. 37–47, 2025
  41. [41] X. Wu et al., "Nonmasking-based reservoir computing with a single dynamic memristor for image recognition," Nonlinear Dynamics, Vol. 112, No. 8, pp. 6663–6678, March 2024
  42. [42] Y. Xia et al., "2D Reconfigurable Memory Device Enabled by Defect Engineering for Multifunctional Neuromorphic Computing," Advanced Materials, Vol. 36, Art. no. 2403785, 2024
  43. [43] R. Midya et al., "Reservoir Computing Using Diffusive Memristors," Advanced Intelligent Systems, Vol. 1, p. 1900084, 2019
  44. [44] R. Daniels et al., "Preprocessing Methods for Memristive Reservoir Computing for Image Recognition," Proceedings of the IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering, pp. 1-6, October 2025
  45. [45] A. Shafiee et al., "ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars," 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), Seoul, Korea (South), 2016, pp. 14-26
  46. [46] Y. Tanaka and H. Tamukoh, "Reservoir-based convolution," Nonlinear Theory and Its Applications, IEICE, Vol. 13, No. 2, pp. 397–402, 2022
  47. [47] A. F. Atiya and A. G. Parlos, "New results on recurrent network training: unifying the algorithms and accelerating convergence," IEEE Transactions on Neural Networks, Vol. 11, No. 3, pp. 697-709, May 2000
  48. [48] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, Vol. 86, No. 11, pp. 2278-2324, November 1998
  49. [49] Asimov Institute, "The neural network zoo," 2019. [Online]. Available: https://www.asimovinstitute.org/neural-network-zoo/. Accessed: November 26, 2025
  50. [50] J. Pan et al., "Flexible TiO2-WO3-x hybrid memristor with enhanced linearity and synaptic plasticity for precise weight tuning in neuromorphic computing," npj Flexible Electronics, Vol. 8, Art. no. 70, October 2024