pith · machine review for the scientific record

arxiv: 2604.23231 · v1 · submitted 2026-04-25 · 💻 cs.CR · cs.AI


Toward Polymorphic Backdoor against Semantic Communication via Intensity-Based Poisoning

Gaolei Li, Jianhua Li, Jun Wu, Kai Zhou, Mingzhe Chen, Xiao Yang, Yuni Lai

Pith reviewed 2026-05-08 08:16 UTC · model grok-4.3

classification 💻 cs.CR cs.AI
keywords semantic communication · backdoor attack · polymorphic backdoor · poisoning attack · trigger intensity · adversarial machine learning · provable defense

The pith

Semantic communication systems can be backdoored so that trigger intensity alone selects among multiple malicious output targets.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces SemBugger, a polymorphic backdoor for semantic communication that overcomes the single-target limit of prior attacks. It poisons training data with triggers at multiple intensity levels and trains the system using a hierarchical malicious loss, so the shared knowledge produces different predetermined outputs depending on the strength of the trigger in a new input. The same model keeps normal transmission quality for clean samples. A separate defense adds controlled noise to inputs and supplies a theoretical lower bound showing it limits the attack. Experiments on varied SC models and datasets confirm the attack succeeds at high rates while the defense reduces its impact.

Core claim

SemBugger realizes a polymorphic SC backdoor through a multi-effect poisoning-training framework that introduces graded-intensity triggers to poison data and optimizes the system with hierarchical malicious loss; the resulting shared knowledge dynamically maps trigger strength to distinct target outputs while preserving transmission fidelity for benign inputs.

What carries the argument

Graded-intensity triggers paired with hierarchical malicious loss in the poisoning framework, which lets the trained SC knowledge adapt its output according to observed trigger strength.
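A rough sketch of how such a mechanism could be wired up (hypothetical names and toy data; not the paper's implementation): a fixed trigger is blended into victim samples at one of K intensity levels, each level bound to its own malicious target, and the training objective weights a per-level loss term alongside a benign-fidelity term.

```python
import numpy as np

def poison(x, trigger, alpha, k, targets):
    """Blend `trigger` into image `x` at intensity level k (alpha[k] in (0, 1]).
    Pixels are clipped back to the [0, 1] range the system assumes."""
    x_p = np.clip(x + alpha[k] * trigger, 0.0, 1.0)
    return x_p, targets[k]

def hierarchical_loss(losses_per_level, clean_loss, weights, lam=1.0):
    """Weighted sum over intensity-level losses plus the benign-fidelity term."""
    return float(np.dot(weights, losses_per_level) + lam * clean_loss)

rng = np.random.default_rng(0)
x = rng.random((8, 8))                                # toy "image"
trigger = np.zeros((8, 8)); trigger[:2, :2] = 1.0     # top-left patch trigger
alpha = [0.2, 0.5, 0.9]                               # graded intensities (free parameters)
targets = ["target_A", "target_B", "target_C"]        # one malicious output per level

x_p, y = poison(x, trigger, alpha, k=2, targets=targets)
assert y == "target_C" and x_p.max() <= 1.0
```

The intensity list and the loss weights are the free parameters the ledger below flags; nothing in the sketch derives them from first principles.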

If this is right

  • A single backdoored SC model can support multiple distinct attack targets across heterogeneous downstream tasks.
  • Attackers can maintain high success rates on malicious inputs while the system continues to function normally on clean data.
  • The same poisoning approach yields measurable attack efficacy on several standard SC architectures and benchmark datasets.
  • Adding strategic noise to inputs provides a defense whose worst-case reduction in attack success is bounded by a formal lower bound.
  • Polymorphic behavior makes the backdoor harder to detect than single-target versions because normal performance is unchanged.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The intensity-based control mechanism could be tested as a general way to embed multiple behaviors in any poisoned neural model used for structured outputs.
  • Real-world SC deployments would need input sanitization that accounts for possible continuous variation in trigger strength rather than fixed patterns.
  • Defenders might combine the noise mechanism with intensity detection to further raise the cost of successful polymorphic attacks.
  • The approach raises the question of whether similar graded poisoning can be applied to other communication modalities that rely on learned representations.

Load-bearing premise

The shared knowledge in a semantic communication model can be made to respond with different outputs to different trigger intensities without any drop in accuracy or fidelity on normal inputs.

What would settle it

A test in which increasing trigger intensity on backdoored inputs produces no measurable shift in output distribution toward the intended distinct targets, or in which the defense noise fails to keep attack success rate below the claimed theoretical bound.
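A minimal harness for that settling test might look like the following (the threshold model is a hypothetical stand-in for a trained SC decoder, not the paper's system): sweep trigger intensity, jitter it slightly, and check whether the output tracks the intended per-level target.

```python
def backdoored_decoder(intensity):
    """Toy stand-in: intensity thresholds select among targets, mimicking
    the claimed polymorphic mapping. A real test would substitute the
    trained SC decoder here."""
    if intensity < 0.35:
        return "target_A"
    if intensity < 0.70:
        return "target_B"
    return "target_C"

# Intended mapping from intensity level to malicious target.
levels = {0.2: "target_A", 0.5: "target_B", 0.9: "target_C"}

def per_level_asr(decoder, levels, jitter=(0.0, 0.02, -0.02)):
    """Fraction of jittered intensities at each level that hit the
    intended target (a per-level attack success rate)."""
    return {
        a: sum(decoder(a + d) == t for d in jitter) / len(jitter)
        for a, t in levels.items()
    }

asr = per_level_asr(backdoored_decoder, levels)
```

If any level's rate stays near chance as intensity varies, the polymorphism claim fails on that model.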

Figures

Figures reproduced from arXiv: 2604.23231 by Gaolei Li, Jianhua Li, Jun Wu, Kai Zhou, Mingzhe Chen, Xiao Yang, Yuni Lai.

Figure 1
Figure 1: Illustration of backdoor attacks against SC systems. Adversaries embed specific triggers into the input samples at the transmitter side (i.e., data poisoning), inducing the system to deliver poisoned data toward predetermined malicious outputs while retaining normal transmission efficacy for benign samples. These hostile outputs may cause abnormal execution results in downstream tasks and undermine system… view at source ↗
Figure 2
Figure 2: Illustration of the proposed SemBugger. 1) Selected victim samples are injected with multi-intensity triggers crafted by the trigger generator to construct a multi-dimensional poisoned dataset. 2) By training with hierarchical loss, the system learns to transform inputs with varying-intensity triggers into differentiated malicious targets, while preserving functionality on benign data. 3) Adversaries achie… view at source ↗
Figure 3
Figure 3: Illustration of defense strategy against SemBugger. It is deployed during the operational phase of the SC system. Before data is input into the system, smoothed noise is added to invalidate potential triggers, which guarantees normal output at the receiver end without affecting regular data transmission. view at source ↗
Figure 4
Figure 4: Test results of ASRs across SNRs (ASR: %; SNR: dB). Please cf. Sec. V-B for detailed explanations. view at source ↗
Figure 5
Figure 5: Test results of clean data PSNRs across SNRs (SNR, PSNR: dB). Please cf. Sec. V-B for detailed explanations. view at source ↗
Figure 7
Figure 7: Ablation test results across poisoning rates γ under communication condition SNR = 25 dB (ASR: %; ∆PSNR: dB). Please cf. Sec. V-C for detailed explanations. view at source ↗
Figure 8
Figure 8: Ablation test results across poisoning rates γ under communication condition SNR = 5 dB (ASR: %; ∆PSNR: dB). Please cf. Sec. V-C for detailed explanations. view at source ↗
original abstract

Semantic Communication (SC) backdoor attacks aim to utilize triggers to manipulate the system into producing predetermined outputs via backdoored shared knowledge. Current SC backdoors adopt monomorphic paradigms with single attack target, which suffers from limited attack diversity, efficiency, and flexibility in heterogeneous downstream scenarios. To overcome the limitations, we propose SemBugger, a polymorphic SC backdoor. By dynamically adjusting the trigger intensity, SemBugger finely-grained controls over the SC knowledge to generate diverse malicious results from the system. Specifically, SemBugger is realized through a multi-effect poisoning-training framework. It introduces graded-intensity triggers to poison training data and optimizes SC systems with hierarchical malicious loss. The trained system's knowledge dynamically adapts to trigger intensity in inputs to yield target outputs, all while preserving transmission fidelity for benign samples. Moreover, to augment SC security, we propose a provable robustness defense that resists SemBugger's homogeneous attacks through a controlled noise mechanism. It operates via strategically adding noise in SC inputs, and we formally provide a theoretical lower bound on the defense efficacy. Experiments across diverse SC models and benchmark datasets indicate that SemBugger attains high attack efficacy while maintaining the regular functionality of SC systems. Meanwhile, the designed defense effectively neutralizes SemBugger attacks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes SemBugger, a polymorphic backdoor attack on semantic communication (SC) systems. Unlike prior monomorphic backdoors limited to single targets, it uses graded-intensity triggers combined with a multi-effect poisoning framework and hierarchical malicious loss to embed multiple distinct malicious outputs that activate based on trigger strength, while preserving benign transmission fidelity. A defense mechanism based on strategic noise addition is introduced, accompanied by a claimed theoretical lower bound on its robustness. Experiments across diverse SC models and benchmark datasets are reported to show high attack efficacy for SemBugger and effective neutralization by the defense.

Significance. If the results and bound hold, the work would be significant for SC security research, as it demonstrates how backdoors can achieve greater flexibility and diversity in heterogeneous downstream tasks, addressing a clear limitation of existing monomorphic approaches. The provable defense element provides a constructive countermeasure with potential to inform practical safeguards in emerging semantic communication systems.

major comments (3)
  1. [§3.2] The hierarchical malicious loss is central to achieving polymorphic behavior, yet the weighting scheme across intensity levels and its optimization objective are described at a level that leaves open whether the adaptation occurs dynamically from the data or requires per-scenario tuning of the free parameters (graded intensities and loss weights).
  2. [Table 3] Table 3 (attack success rates): High efficacy is asserted across models, but the table reports point estimates without standard deviations, number of runs, or ablation on intensity selection; this makes it impossible to determine whether the polymorphic property generalizes or reflects post-hoc intensity fitting.
  3. [§5.1] §5.1, Eq. (defense bound): The theoretical lower bound on defense efficacy is load-bearing for the security claim, but the derivation appears to assume homogeneous trigger effects; it is unclear how the bound extends to the graded-intensity case without additional assumptions on the noise distribution relative to trigger strength.
minor comments (2)
  1. [Abstract] The abstract and §1 use 'SC' without spelling out 'Semantic Communication' on first use, which is a minor clarity issue for readers outside the immediate subfield.
  2. [Figure 2] Figure 2 caption could explicitly state the intensity values corresponding to each malicious target to improve reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback, which helps improve the clarity and rigor of our manuscript on SemBugger. We address each major comment point-by-point below, providing clarifications based on the existing framework and committing to revisions where needed to strengthen the presentation.

point-by-point responses
  1. Referee: [§3.2] The hierarchical malicious loss is central to achieving polymorphic behavior, yet the weighting scheme across intensity levels and its optimization objective are described at a level that leaves open whether the adaptation occurs dynamically from the data or requires per-scenario tuning of the free parameters (graded intensities and loss weights).

    Authors: We appreciate this observation on the description in §3.2. The graded intensities are selected a priori according to the desired distinct malicious targets for different trigger strengths, and the hierarchical loss weights are assigned proportionally to these intensities within the multi-effect poisoning framework. The joint optimization objective then trains the SC model such that, at inference time, the input trigger intensity dynamically determines the output via the adapted knowledge. While the intensity levels and weights involve scenario-specific selection to ensure polymorphism (as noted in the abstract's emphasis on dynamic adjustment), the adaptation itself is data-driven through the poisoning process rather than requiring runtime tuning. We will revise §3.2 to include the explicit weighting formulation, the optimization objective, and a discussion of parameter selection to eliminate ambiguity. revision: yes

  2. Referee: [Table 3] Table 3 (attack success rates): High efficacy is asserted across models, but the table reports point estimates without standard deviations, number of runs, or ablation on intensity selection; this makes it impossible to determine whether the polymorphic property generalizes or reflects post-hoc intensity fitting.

    Authors: We agree that reporting only point estimates limits the ability to assess variability and generalization. The experiments underlying Table 3 were performed across multiple independent runs on the benchmark datasets and SC models. In the revised manuscript, we will update Table 3 to report mean attack success rates accompanied by standard deviations. We will also add an ablation study analyzing the impact of different intensity level selections, demonstrating that the polymorphic behavior holds across reasonable choices rather than resulting from post-hoc fitting. These additions will be placed in the experimental section with appropriate discussion. revision: yes

  3. Referee: [§5.1] §5.1, Eq. (defense bound): The theoretical lower bound on defense efficacy is load-bearing for the security claim, but the derivation appears to assume homogeneous trigger effects; it is unclear how the bound extends to the graded-intensity case without additional assumptions on the noise distribution relative to trigger strength.

    Authors: The bound in §5.1 is derived for the homogeneous case as the base result, with the defense operating via strategic noise addition to inputs. For the graded-intensity polymorphic setting, the bound extends conservatively by scaling the noise variance to the maximum trigger intensity across levels, which upper-bounds the effect on all weaker triggers under standard assumptions (e.g., additive Gaussian noise). This provides a valid lower bound on efficacy that applies to SemBugger without requiring per-level adjustments. We will expand the derivation in the revised §5.1 to explicitly state these assumptions on the noise distribution and include the step-by-step extension to the graded case, reinforcing the security claim. revision: yes
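A sketch of that conservative extension as we read it (assumed additive Gaussian noise; `sigma_per_unit` is a hypothetical calibration constant, not a value from the paper): key the smoothing noise to the strongest intensity level, so every weaker trigger is perturbed at least as strongly relative to its own magnitude.

```python
import numpy as np

def smoothed_input(x, alpha_max, sigma_per_unit=1.0, rng=None):
    """Add Gaussian noise scaled to the maximum trigger intensity, then
    clip back to the [0, 1] pixel range the defense assumes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = sigma_per_unit * alpha_max   # noise keyed to the strongest level
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

x = np.full((4, 4), 0.5)                 # toy clean input
x_s = smoothed_input(x, alpha_max=0.9)   # defense applied before the SC encoder
```

Whether a single noise scale actually dominates all graded triggers is exactly the assumption the referee asks the revised §5.1 to state.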

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper introduces SemBugger as a polymorphic SC backdoor via a multi-effect poisoning framework with graded-intensity triggers and hierarchical malicious loss, plus a defense with a claimed theoretical lower bound on efficacy. No load-bearing derivation, equation, or result reduces by construction to its own inputs, fitted parameters, or self-citation chains; attack efficacy and defense performance are presented as outcomes of the proposed training procedure and experiments across models/datasets rather than tautological redefinitions. The central claims remain independent of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 1 invented entity

The approach rests on the assumption that poisoning with graded triggers can embed intensity-dependent behaviors in SC models without explicit formalization of how the shared knowledge adapts.

free parameters (2)
  • graded trigger intensities
    Dynamically chosen values that control which malicious output is produced; values are not derived from first principles.
  • hierarchical malicious loss weights
    Parameters balancing multiple attack targets during optimization.
axioms (1)
  • domain assumption: Semantic communication models can be successfully poisoned to embed multiple backdoor behaviors via intensity variation while preserving benign performance.
    Invoked in the multi-effect poisoning-training framework description.
invented entities (1)
  • SemBugger (no independent evidence)
    purpose: Polymorphic backdoor attack framework using intensity-based poisoning.
    Newly proposed method without independent evidence outside the paper's experiments.

pith-pipeline@v0.9.0 · 5534 in / 1326 out tokens · 81584 ms · 2026-05-08T08:16:27.360203+00:00 · methodology

