pith. machine review for the scientific record.

arxiv: 2602.07200 · v2 · submitted 2026-02-06 · 💻 cs.CR · cs.AI

Recognition: no theorem link

BadSNN: Backdoor Attacks on Spiking Neural Networks via Adversarial Spiking Neuron

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 06:28 UTC · model grok-4.3

classification 💻 cs.CR cs.AI
keywords: backdoor attacks · spiking neural networks · LIF neurons · adversarial hyperparameters · neuromorphic security · trigger optimization · data poisoning

The pith

Spiking neural networks can be backdoored by deliberately varying the membrane threshold and time constant of their spiking neurons.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes BadSNN, which injects backdoor behavior into spiking neural networks by modifying the membrane potential threshold and membrane time constant inside the Leaky Integrate-and-Fire neuron model. An adversary poisons training with optimized triggers so the network misbehaves only on those triggers while clean accuracy stays high. This approach is shown to beat standard data-poisoning backdoor attacks across multiple datasets and SNN architectures. It also resists common mitigation methods such as fine-tuning and pruning. A reader should care because SNNs are promoted for low-power edge devices; if the claim holds, their security depends on more than weight integrity.

Core claim

BadSNN embeds backdoor behavior in spiking neural networks by deliberately varying the hyperparameters of the spiking neurons, specifically the membrane potential threshold and membrane time constant of the Leaky Integrate-and-Fire model, combined with optimized triggers. It achieves higher attack success rates than data-poisoning methods while preserving clean accuracy.

What carries the argument

Adversarial variation of spiking-neuron hyperparameters (membrane threshold and time constant) inside the LIF model to create a persistent trigger-activated backdoor.
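The lever here is how sensitive LIF dynamics are to these two hyperparameters. A minimal discrete-time LIF simulation (a common SpikingJelly-style update rule, assumed here; the paper's exact formulation may differ) shows how lowering the threshold changes firing behavior on the same input:

```python
def lif_simulate(inputs, v_thr=1.0, tau=2.0, v_reset=0.0):
    """Simulate one Leaky Integrate-and-Fire neuron over a spike train.

    Discrete-time update (illustrative, SpikingJelly-style):
        v[t] = v[t-1] + (x[t] - (v[t-1] - v_reset)) / tau
    The neuron fires when v crosses v_thr, then hard-resets to v_reset.
    """
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - (v - v_reset)) / tau  # leaky integration
        if v >= v_thr:
            spikes.append(1)
            v = v_reset  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

inp = [0.6] * 10                                      # constant sub-threshold drive
baseline = sum(lif_simulate(inp, v_thr=1.0, tau=2.0))  # never fires: v converges to 0.6
lowered  = sum(lif_simulate(inp, v_thr=0.5, tau=2.0))  # fires periodically
```

With the standard threshold the membrane potential saturates below 1.0 and the neuron stays silent; halving the threshold makes the same input fire every third step. That asymmetry is the kind of hyperparameter-conditioned behavior an adversary could tie to a trigger.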

Load-bearing premise

Deliberate changes to the membrane potential threshold and time constant of spiking neurons can embed a reliable backdoor that activates only on specific triggers without reducing accuracy on clean inputs or being removed by standard defenses.

What would settle it

Train an SNN with the proposed hyperparameter variations on MNIST or CIFAR-10. Then measure whether clean accuracy stays above 90 percent while the attack success rate on inputs containing the optimized trigger exceeds 90 percent, and whether fine-tuning or neuron pruning drops that success rate below 20 percent.
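The pass/fail thresholds above reduce to two standard metrics. A minimal sketch of how they are conventionally computed (function names are ours, not the paper's):

```python
def clean_accuracy(preds, labels):
    """Fraction of clean inputs classified correctly."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def attack_success_rate(triggered_preds, target_label, true_labels):
    """Fraction of triggered inputs classified as the attacker's target.

    Measured only on samples whose true label differs from the target,
    so trivially-correct cases do not inflate the rate (the usual
    convention in the backdoor literature).
    """
    hits = total = 0
    for p, y in zip(triggered_preds, true_labels):
        if y != target_label:
            total += 1
            hits += (p == target_label)
    return hits / total

# Toy example: 4 clean predictions, 4 triggered predictions, target class 0.
acc = clean_accuracy([0, 1, 2, 1], [0, 1, 2, 2])
asr = attack_success_rate([0, 0, 0, 1], target_label=0,
                          true_labels=[1, 2, 0, 1])
```

A successful BadSNN run would show both metrics above 0.9; a successful defense would push `asr` below 0.2 while leaving `acc` intact.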

Figures

Figures reproduced from arXiv: 2602.07200 by Abdullah Arafat Miah, Kevin Vu, Yu Bi.

Figure 1: Effect of membrane potential threshold (…)
Figure 2: Overview of the proposed BadSNN. BadSNN has three major steps: Backdoor Training, Trigger Optimization, and Inference. During Backdoor Training, the adversary manipulates the hyperparameters of the spiking neurons in the target clean SNN to perform dual spike learning. In the Trigger Optimization step, the weights of a trigger generation model are optimized to generate minimally perceptible trigger pertur…
Figure 3: CA/ASR heatmaps for different V_thr and τ.
Figure 4: Attack effectiveness analysis for different poisoning (…)
Original abstract

Spiking Neural Networks (SNNs) are energy-efficient counterparts of Deep Neural Networks (DNNs) with high biological plausibility, as information is transmitted through temporal spiking patterns. The core element of an SNN is the spiking neuron, which converts input data into spikes following the Leaky Integrate-and-Fire (LIF) neuron model. This model includes several important hyperparameters, such as the membrane potential threshold and membrane time constant. Both the DNNs and SNNs have proven to be exploitable by backdoor attacks, where an adversary can poison the training dataset with malicious triggers and force the model to behave in an attacker-defined manner. Yet, how an adversary can exploit the unique characteristics of SNNs for backdoor attacks remains underexplored. In this paper, we propose BadSNN, a novel backdoor attack on spiking neural networks that exploits hyperparameter variations of spiking neurons to inject backdoor behavior into the model. We further propose a trigger optimization process to achieve better attack performance while making trigger patterns less perceptible. BadSNN demonstrates superior attack performance on various datasets and architectures, as well as compared with state-of-the-art data poisoning-based backdoor attacks and robustness against common backdoor mitigation techniques. Codes can be found at https://github.com/SiSL-URI/BadSNN.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes BadSNN, a backdoor attack on Spiking Neural Networks that injects malicious behavior by deliberately varying the membrane potential threshold and membrane time constant of Leaky Integrate-and-Fire neurons during training, combined with a separate trigger optimization step to improve attack success while reducing trigger perceptibility. It claims superior attack success rates and clean accuracy preservation across multiple datasets and SNN architectures relative to prior data-poisoning backdoor methods, plus resistance to common mitigation techniques.

Significance. If the empirical claims hold after proper controls, the work would identify a hyperparameter-specific attack surface in SNNs that is not directly transferable from DNN backdoor literature, with implications for the security of neuromorphic hardware. The public code release is a clear strength for reproducibility.

major comments (2)
  1. [Experiments (likely §4)] The experimental evaluation lacks an ablation that applies the identical optimized triggers under standard fixed LIF hyperparameters (threshold and time constant) as a control. Without this comparison, performance and robustness gains cannot be confidently attributed to the claimed adversarial spiking neuron construction rather than to trigger engineering alone.
  2. [Proposed Method (likely §3)] The method section does not specify the precise training procedure for embedding the backdoor via hyperparameter variation (e.g., whether thresholds are optimized jointly with weights, frozen after poisoning, or adjusted post-training) nor how this interacts with the standard data-poisoning loss.
minor comments (2)
  1. [Abstract] Abstract sentence 'demonstrates superior attack performance on various datasets and architectures, as well as compared with state-of-the-art' is grammatically unclear and should be rephrased.
  2. [Figures] Figure captions and axis labels for attack success rate and clean accuracy plots should explicitly state the number of runs and error bars used.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and will revise the manuscript to strengthen the presentation and experimental controls.

Point-by-point responses
  1. Referee: The experimental evaluation lacks an ablation that applies the identical optimized triggers under standard fixed LIF hyperparameters (threshold and time constant) as a control. Without this comparison, performance and robustness gains cannot be confidently attributed to the claimed adversarial spiking neuron construction rather than to trigger engineering alone.

    Authors: We agree that this control ablation is necessary to isolate the contribution of adversarial LIF hyperparameter variation from trigger optimization alone. In the revised manuscript we will add the requested experiment: the same optimized triggers will be evaluated under fixed standard LIF hyperparameters (threshold and time constant) on the same architectures and datasets, with results reported alongside the original BadSNN results. revision: yes

  2. Referee: The method section does not specify the precise training procedure for embedding the backdoor via hyperparameter variation (e.g., whether thresholds are optimized jointly with weights, frozen after poisoning, or adjusted post-training) nor how this interacts with the standard data-poisoning loss.

    Authors: We apologize for the omission. In BadSNN the membrane threshold and time constant are treated as learnable parameters and optimized jointly with the synaptic weights during the single backdoor training stage. The overall loss is the sum of the standard cross-entropy loss on clean data and the backdoor loss on poisoned samples; gradients flow through both the weights and the neuron hyperparameters. The hyperparameters are not frozen after poisoning nor adjusted post-training. We will expand Section 3 with a dedicated subsection that formally describes this joint optimization procedure and its interaction with the data-poisoning objective. revision: yes
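The joint optimization the rebuttal describes can be sketched at toy scale. Everything below is our illustration, not the paper's code: a smooth sigmoid stands in for the non-differentiable spike (as surrogate-gradient training does), finite differences stand in for backpropagation, and the weight, threshold, and time constant are all updated together under one combined clean-plus-backdoor loss:

```python
import math

def lif_rate(x, w, v_thr, tau):
    """Smooth surrogate for a LIF neuron's firing rate (illustrative only:
    w*x*tau approximates integrated drive, v_thr shifts the firing onset)."""
    return 1.0 / (1.0 + math.exp(-(w * x * tau - v_thr)))

def combined_loss(params, clean, poisoned):
    """Clean loss + backdoor loss on poisoned samples; gradients flow into
    the weight w AND the neuron hyperparameters v_thr and tau."""
    w, v_thr, tau = params
    l = sum((lif_rate(x, w, v_thr, tau) - y) ** 2 for x, y in clean)
    l += sum((lif_rate(x, w, v_thr, tau) - y) ** 2 for x, y in poisoned)
    return l

def train(params, clean, poisoned, lr=0.1, steps=200, eps=1e-4):
    params = list(params)
    for _ in range(steps):
        grads = []
        for i in range(3):  # forward-difference gradient per parameter
            bumped = params.copy()
            bumped[i] += eps
            grads.append((combined_loss(bumped, clean, poisoned)
                          - combined_loss(params, clean, poisoned)) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

clean = [(0.2, 0.0), (1.0, 1.0)]   # normal input-label pairs
poisoned = [(0.6, 1.0)]            # triggered input forced to the target label
start = [1.0, 1.0, 2.0]            # w, v_thr, tau -- all three trainable
before = combined_loss(start, clean, poisoned)
trained = train(start, clean, poisoned)
after = combined_loss(trained, clean, poisoned)
```

The point of the sketch is structural: because `v_thr` and `tau` sit in the parameter vector alongside `w`, the single training stage moves them to serve both objectives, which matches the authors' statement that the hyperparameters are neither frozen after poisoning nor adjusted post-training.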

Circularity Check

0 steps flagged

Empirical construction with no derivation chain or self-referential reductions

full rationale

The paper proposes BadSNN as an empirical backdoor attack method that varies LIF neuron hyperparameters (threshold and time constant) combined with a separate trigger optimization step, then reports experimental results on attack success, clean accuracy, and mitigation resistance across datasets and architectures. No equations, derivations, or first-principles claims appear in the provided text that would reduce any performance metric to a fitted parameter defined by the same metric or to a self-citation chain. The central claims rest on experimental outcomes rather than any mathematical reduction to inputs, making the work self-contained as a constructive attack design without circularity.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Based on the abstract alone, no explicit free parameters, axioms, or invented entities are introduced beyond the standard Leaky Integrate-and-Fire neuron model and conventional backdoor poisoning assumptions already present in the cited literature.

pith-pipeline@v0.9.0 · 5548 in / 1179 out tokens · 24843 ms · 2026-05-16T06:28:42.642350+00:00 · methodology

