pith. machine review for the scientific record. sign in

arxiv: 2604.09489 · v1 · submitted 2026-04-10 · 💻 cs.CR · cs.AI· cs.DC· cs.LG

Recognition: unknown

XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers

Israt Jahan Mouri, Muhammad Abdullah Adnan, Muhammad Ridowan

Pith reviewed 2026-05-10 16:59 UTC · model grok-4.3

classification 💻 cs.CR cs.AIcs.DCcs.LG
keywords federated learningmodel poisoning attacknon-collusive attackByzantine-robust aggregationadversarial machine learningfederated classifiersdistributed security
0
0 comments X

The pith

Independent attackers can poison federated classifiers without communicating or knowing server defenses.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that model poisoning attacks on federated learning do not require any coordination among attackers. It formalizes a non-collusive model in which each compromised client shares only the goal of harming the global model and generates its own poisoned update in isolation. XFED implements this idea as an aggregation-agnostic method that crafts malicious updates without access to other clients' data or knowledge of defenses. Experiments across six datasets show the attack evades eight existing defenses and exceeds the success of six prior poisoning techniques. This would mean federated systems can be degraded by isolated compromised devices rather than needing large coordinated groups.

Core claim

Under the non-collusive attack model, where all compromised clients pursue a shared adversarial objective but operate independently without communication, knowledge of other updates, or information about server defenses, it is possible to generate poisoned model updates that degrade Byzantine-robust federated classifiers; XFED demonstrates this by remaining effective regardless of the aggregation rule used at the server.

What carries the argument

XFED, an aggregation-agnostic procedure that each attacker runs locally to produce a poisoned update aimed at the shared malicious goal.

Load-bearing premise

Independent attackers without communication or knowledge of defenses can still craft poisoned updates strong enough to degrade the global model.

What would settle it

An evaluation in which XFED produces no measurable accuracy drop on one of the six benchmark datasets or against one of the eight tested defenses would show the attack does not hold under the claimed conditions.

Figures

Figures reproduced from arXiv: 2604.09489 by Israt Jahan Mouri, Muhammad Abdullah Adnan, Muhammad Ridowan.

Figure 1
Figure 1. Figure 1: Attack Impact Iθ on global models with increasing malicious clients across various defenses and attacks (MNIST dataset). However, FLTrust mitigates the Xuv attack and other attacks by using a small trusted root dataset in the server to compute a reference gradient that allows it to filter updates that de￾viate from the expected direction. While [PITH_FULL_IMAGE:figures/full_fig_p007_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Attack impact Iθ on global models as a function of the degree of non-IID under different defenses and attacks (Purchase Dataset). in each training round, the server selects a random subset of 1% (500 clients) to participate in training. Similar to cross-silo setting, Xuv outperforms all other attacks for all the combinations of aggregations and defenses (except FLTrust) by a substantial margin in almost al… view at source ↗
Figure 3
Figure 3. Figure 3: Attack Impact Iθ on global models with increasing % of malicious clients for different aggregations, defenses, and attacks in a cross-silo setting (Purchase dataset). 0 5 10 15 20 25 30 0 10 20 30 40 50 60 70 80 90 Attack Impact MNIST, FedAvg, Cross-silo 0 5 10 15 20 25 30 MNIST, Median, Cross-silo 0 5 10 15 20 25 30 Purchase, FedAvg, Cross-silo 0 5 10 15 20 25 30 0 10 20 30 40 50 60 70 80 90 Attack Impact… view at source ↗
Figure 4
Figure 4. Figure 4: Attack Impact Iθ on global model with increasing % of malicious clients for different attacks and aggregations (FedAvg and Median) in different FL settings for different datasets (MNIST & Purchase). the Purchase and MNIST (FedAvg and Median) datasets. We observe that none of the state-of-the-art aggregations or defenses, except for FLTrust, can effectively mitigate our pro￾posed Xuv attack across all perce… view at source ↗
Figure 5
Figure 5. Figure 5: Attack Impact Iθ on global models with increasing % of malicious clients for different aggregations, defenses, and attacks in a cross-device setting (Purchase dataset). .1 .3 .5 .7 .9 0 20 40 60 80 100 Attack Impact M-Krum .1 .3 .5 .7 .9 Tr-Mean .1 .3 .5 .7 .9 CC .1 .3 .5 .7 .9 SgnG .1 .3 .5 .7 .9 FoolsGold .1 .3 .5 .7 .9 FLAME .1 .3 .5 .7 .9 FreqFed .1 .3 .5 .7 .9 FLTrust Aθ Xuv Xsgn M in − Max F ang LIE … view at source ↗
Figure 6
Figure 6. Figure 6: Attack impact Iθ on global models as a function of the degree of non-IID for different aggregations, defenses, and attacks (MNIST Dataset). .1 .3 .5 .7 .9 0 20 40 60 80 100 Attack Impact MNIST, Fed-Avg .1 .3 .5 .7 .9 MNIST, Median .1 .3 .5 .7 .9 Purchase, Fed-Avg .1 .3 .5 .7 .9 Purchase, Median Aθ Xuv Xsgn M in − Max F ang LIE M in − Sum MP AF P oisonedF L [PITH_FULL_IMAGE:figures/full_fig_p019_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Attack Impact Iθ on global model as a function of the degree of non-IID for different attacks and aggregations (FedAvg and Median) for different datasets (MNIST & Purchase). Clipped Clustering, SignGuard, FoolsGold, FLAME, and FreqFed. As p increases and the data become more unevenly distributed across clients, the performance of Xuv does not degrade; the attack impact stays roughly the same for all levels… view at source ↗
Figure 8
Figure 8. Figure 8: Impact of the history window size Ω on the execution time of the Xuv attack (20% malicious clients) on the MNIST dataset in the cross-silo setting. As Ω increases, the runtime grows noticeably. device, it may not be practical to store so many global deltas. Especially if the attacker is a silent malware type hiding on a mobile device. In practice, we suggest that an attacker store the last Ω global deltas.… view at source ↗
Figure 9
Figure 9. Figure 9: Effect of λ on the attack impact Iθ of the Xuv attack (20% malicious clients) on the MNIST dataset in cross-silo setting. In all experiments, we set Ω = 8. Impact of λ. As discussed in Appendix A.2, the recom￾mended range for λ, derived from outlier detection guide￾lines in the literature and common practice, is [2, 10] [PITH_FULL_IMAGE:figures/full_fig_p020_9.png] view at source ↗
read the original abstract

Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This approach is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: Can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the \textbf{non-collusive attack model}, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the feasibility of this threat model, we propose \textbf{XFED}, the first aggregation-agnostic, non-collusive model poisoning attack. Our empirical evaluation across six benchmark datasets shows that XFED bypasses eight state-of-the-art defenses and outperforms six existing model poisoning attacks. These findings indicate that FL systems are substantially less secure than previously believed and underscore the urgent need for more robust and practical defense mechanisms.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript formalizes a non-collusive model poisoning threat model for federated learning, in which compromised clients share only a common adversarial objective but generate poisoned updates independently without communication, knowledge of other updates, or server defenses. It proposes XFED as the first aggregation-agnostic attack under this model and reports empirical results on six benchmark datasets showing that XFED bypasses eight state-of-the-art Byzantine-robust defenses while outperforming six prior model poisoning attacks.

Significance. If the results hold under the stated non-collusive constraints, the work would indicate that existing Byzantine-robust FL aggregations remain vulnerable to independent attackers, motivating stronger defenses that do not rely on assumptions of coordination or outlier detection alone. The multi-dataset, multi-defense evaluation provides a useful empirical baseline for future comparisons.

major comments (3)
  1. [§4] §4 (XFED Attack Design): The poison-generation procedure is described as operating solely from the current global model and local data, yet the manuscript provides no analysis or bound showing that independently generated poisons will form a sufficiently tight cluster to survive coordinate-wise median, trimmed mean, or Krum aggregation when local data distributions differ across attackers. Without such justification or variance measurements, the central claim that the attack remains effective under realistic heterogeneity is not yet load-bearing.
  2. [§5.2] §5.2 (Experimental Evaluation): The reported success rates against the eight defenses lack per-run standard deviations, ablation on attacker fraction and data heterogeneity levels, and explicit comparison of poison-direction variance across independent attackers. These omissions make it impossible to assess whether the observed degradation is robust or sensitive to the implicit assumption of statistically identical attacker data.
  3. [§3.1] §3.1 (Threat Model): The non-collusive model explicitly disallows knowledge of server defenses, but the XFED procedure appears to require a scaling or direction choice that could implicitly depend on defense behavior; the manuscript should clarify whether any hyper-parameter tuning was performed with oracle access to the aggregator.
minor comments (2)
  1. [Figures/Tables] Figure 3 and Table 4: Axis labels and legend entries are too small for readability; increase font size and add error bars where multiple runs are implied.
  2. [§2] §2 (Related Work): The comparison table omits the exact hyper-parameter settings used for the six baseline attacks, hindering direct reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for major revision. The feedback highlights important aspects of the non-collusive threat model and evaluation that we will strengthen in the revised manuscript. We address each major comment point by point below.

read point-by-point responses
  1. Referee: [§4] §4 (XFED Attack Design): The poison-generation procedure is described as operating solely from the current global model and local data, yet the manuscript provides no analysis or bound showing that independently generated poisons will form a sufficiently tight cluster to survive coordinate-wise median, trimmed mean, or Krum aggregation when local data distributions differ across attackers. Without such justification or variance measurements, the central claim that the attack remains effective under realistic heterogeneity is not yet load-bearing.

    Authors: We agree that the manuscript would benefit from explicit analysis of poison clustering under heterogeneity. While the empirical results on six heterogeneous benchmark datasets support effectiveness, we did not include dedicated variance measurements or a bound. In the revision we will add to §4 empirical measurements of cosine similarity and directional variance among independently generated poisons across attackers with differing local distributions (using Dirichlet heterogeneity), together with a short argument that the shared global model and common objective induce sufficient alignment for the poisons to survive the listed aggregators. This addition will make the heterogeneity claim load-bearing. revision: yes

  2. Referee: [§5.2] §5.2 (Experimental Evaluation): The reported success rates against the eight defenses lack per-run standard deviations, ablation on attacker fraction and data heterogeneity levels, and explicit comparison of poison-direction variance across independent attackers. These omissions make it impossible to assess whether the observed degradation is robust or sensitive to the implicit assumption of statistically identical attacker data.

    Authors: We acknowledge that the current evaluation omits these elements. The revised manuscript will add per-run standard deviations to all success-rate tables and figures in §5.2, new ablation tables varying attacker fraction (10–30 %) and data heterogeneity (Dirichlet α values), and an explicit comparison (new figure or table) of poison-direction variance across independent attackers. These additions will demonstrate that the observed degradation is robust rather than sensitive to identical attacker data. revision: yes

  3. Referee: [§3.1] §3.1 (Threat Model): The non-collusive model explicitly disallows knowledge of server defenses, but the XFED procedure appears to require a scaling or direction choice that could implicitly depend on defense behavior; the manuscript should clarify whether any hyper-parameter tuning was performed with oracle access to the aggregator.

    Authors: We clarify that XFED is constructed to be aggregation-agnostic and uses no information about server defenses. The scaling factor and direction are determined exclusively from the adversarial objective, the current global model, and local data; no hyper-parameter was tuned with oracle access to any aggregator. We will revise §3.1 and §4 to state this explicitly, list the fixed hyper-parameter values, and confirm that all choices follow the non-collusive threat model without reference to defense behavior. revision: partial

Circularity Check

0 steps flagged

No circularity: purely empirical attack proposal

full rationale

The paper introduces a non-collusive attack model and XFED via textual definition and empirical evaluation on six datasets against eight defenses. No equations, parameter fittings, or derivations appear in the provided text. Central claims rest on experimental outcomes rather than reducing to self-definitions, fitted inputs renamed as predictions, or load-bearing self-citations. The work is self-contained against external benchmarks with no reduction of any result to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The central claim rests on the domain assumption that a non-collusive threat model is realistic and that effective poisoned updates can be generated independently without knowledge of defenses or other clients.

axioms (1)
  • domain assumption Independent attackers sharing only a common objective can still produce effective model poisoning updates without communication or knowledge of server defenses.
    This premise is required for the non-collusive model to be a meaningful threat; it is stated but not proven in the abstract.

pith-pipeline@v0.9.0 · 5539 in / 1221 out tokens · 50424 ms · 2026-05-10T16:59:14.452550+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

58 extracted references · 10 canonical work pages · 2 internal anchors

  1. [1]

    A public domain dataset for human activity recognition using smartphones

    Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, Jorge Luis Reyes-Ortiz, et al. A public domain dataset for human activity recognition using smartphones. InEsann, pages 3–4, 2013

  2. [2]

    How to backdoor federated learning

    Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. InInternational Conference on Artificial Intelli- gence and Statistics, pages 2938–2948. PMLR, 2020

  3. [3]

    A little is enough: Circumventing defenses for distributed learning.Ad- vances in Neural Information Processing Systems, 32, 2019

    Gilad Baruch, Moran Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning.Ad- vances in Neural Information Processing Systems, 32, 2019

  4. [4]

    signsgd with majority vote is communication efficient and fault tolerant.arXiv preprint arXiv:1810.05291, 2018

    Jeremy Bernstein, Jiawei Zhao, Kamyar Azizzadenesheli, and Anima Anandkumar. signsgd with majority vote is communication efficient and fault tolerant.arXiv preprint arXiv:1810.05291, 2018

  5. [5]

    Analyzing federated learning through an adversarial lens

    Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. InInternational Conference on Machine Learning, pages 634–643. PMLR, 2019

  6. [7]

    Machine learning with adversaries: Byzantine tolerant gradient descent.Advances in neural information processing systems, 30, 2017

    Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent.Advances in neural information processing systems, 30, 2017

  7. [8]

    Mpaf: Model poi- soning attacks to federated learning based on fake clients

    Xiaoyu Cao and Neil Zhenqiang Gong. Mpaf: Model poi- soning attacks to federated learning based on fake clients. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3396–3404, 2022

  8. [9]

    Fltrust: Byzantine-robust federated learning via trust bootstrapping

    Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. Fltrust: Byzantine-robust federated learning via trust bootstrapping. InISOC Network and Distributed System Security Symposium (NDSS), 2021

  9. [10]

    Draco: Byzantine-resilient distributed train- ing via redundant gradients

    Lingjiao Chen, Hongyi Wang, Zachary Charles, and Dimitris Papailiopoulos. Draco: Byzantine-resilient distributed train- ing via redundant gradients. InInternational Conference on Machine Learning, pages 903–912. PMLR, 2018

  10. [11]

    Emnist: Extending mnist to handwritten letters

    Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In2017 international joint conference on neural networks (IJCNN), pages 2921–2926. IEEE, 2017

  11. [12]

    The mnist database of handwritten digit images for machine learning research [best of the web].IEEE signal processing magazine, 29(6):141–142, 2012

    Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web].IEEE signal processing magazine, 29(6):141–142, 2012

  12. [13]

    Being robust (in high dimensions) can be practical

    Ilias Diakonikolas, Gautam Kamath, Daniel M Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. InInternational Conference on Machine Learning, pages 999–1008. PMLR, 2017

  13. [14]

    Sever: A robust meta- algorithm for stochastic optimization

    Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta- algorithm for stochastic optimization. InInternational Confer- ence on Machine Learning, pages 1596–1606. PMLR, 2019

  14. [15]

    Malware and virus statistics 2024: The trends you need to know about

    Jessica Valasek Estenssoro. Malware and virus statistics 2024: The trends you need to know about. https://www.avg. com/en/signal/malware-statistics, 2024. Pub- lished on A VG Antivirus’ website

  15. [16]

    Falowo, Murat Ozer, Chengcheng Li, and Jacques Bou Abdo

    Olufunsho I. Falowo, Murat Ozer, Chengcheng Li, and Jacques Bou Abdo. Evolving malware and ddos attacks: Decadal longitudinal study.IEEE Access, 12:39221–39237, 2024

  16. [17]

    Local model poisoning attacks to {Byzantine-Robust} feder- ated learning

    Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Gong. Local model poisoning attacks to {Byzantine-Robust} feder- ated learning. In29th USENIX Security Symposium (USENIX Security 20), pages 1605–1622, 2020

  17. [18]

    Freqfed: A frequency analysis-based approach for mitigating poisoning attacks in federated learning

    Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger, Alexandra Dmitrienko, and Ahmad-Reza Sadeghi. Freqfed: A frequency analysis-based approach for mitigating poisoning attacks in federated learning. 2025

  18. [19]

    arXiv preprint arXiv:1808.04866 (2018)

    Clement Fung, Chris JM Yoon, and Ivan Beschastnikh. Miti- gating sybils in federated learning poisoning.arXiv preprint arXiv:1808.04866, 2018

  19. [20]

    The hidden vul- nerability of distributed learning in byzantium

    Rachid Guerraoui, S´ebastien Rouault, et al. The hidden vul- nerability of distributed learning in byzantium. InInterna- tional Conference on Machine Learning, pages 3521–3530. PMLR, 2018

  20. [21]

    Federated Learning for Mobile Keyboard Prediction

    Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ra- maswamy, Franc ¸oise Beaufays, Sean Augenstein, Hubert Eichner, Chlo ´e Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction.arXiv preprint arXiv:1811.03604, 2018

  21. [22]

    Understanding robust and exploratory data analysis

    David C Hoaglin, Frederick Mosteller, and John W Tukey. Understanding robust and exploratory data analysis. John Wiley & Sons, 2000

  22. [23]

    Cronus: Ro- bust and heterogeneous collaborative learning with black-box knowledge transfer,

    Chang Hongyan, Shejwalkar Virat, Shokri Reza, and Houmansadr Amir. Cronus: Robust and heterogeneous col- laborative learning with black-box knowledge transfer.arXiv preprint arXiv:1912.11279, 2019

  23. [24]

    Cross-silo feder- ated learning: Challenges and opportunities.arXiv preprint arXiv:2206.12949, 2022

    Chao Huang, Jianwei Huang, and Xin Liu. Cross-silo feder- ated learning: Challenges and opportunities.arXiv preprint arXiv:2206.12949, 2022

  24. [25]

    Personalized cross-silo federated learning on non-iid data.Proceedings of the AAAI Conference on Artificial Intelligence, 35(9):7865– 7873, 2021

    Yutao Huang, Lingyang Chu, Zirui Zhou, Lanjun Wang, Jiangchuan Liu, Jian Pei, and Yong Zhang. Personalized cross-silo federated learning on non-iid data.Proceedings of the AAAI Conference on Artificial Intelligence, 35(9):7865– 7873, 2021

  25. [26]

    Quality Press, 1993

    Boris Iglewicz and David C Hoaglin.Volume 16: how to detect and handle outliers. Quality Press, 1993

  26. [27]

    Manipulating machine learn- ing: Poisoning attacks and countermeasures for regression learning

    Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine learn- ing: Poisoning attacks and countermeasures for regression learning. In2018 IEEE Symposium on Security and Privacy (SP), pages 19–35. IEEE, 2018

  27. [28]

    Breaking the centralized barrier for cross-device federated learning

    Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Breaking the centralized barrier for cross-device federated learning. 34:28663–28676, 2021. 9

  28. [29]

    Federated Optimization: Distributed Machine Learning for On-Device Intelligence

    Jakub Koneˇcn`y, H Brendan McMahan, Daniel Ramage, and Peter Richt ´arik. Federated optimization: Distributed ma- chine learning for on-device intelligence.arXiv preprint arXiv:1610.02527, 2016

  29. [30]

    Federated Learning: Strategies for Improving Communication Efficiency

    Jakub Koneˇcn`y, H Brendan McMahan, Felix X Yu, Peter Richt´arik, Ananda Theertha Suresh, and Dave Bacon. Feder- ated learning: Strategies for improving communication effi- ciency.arXiv preprint arXiv:1610.05492, 2016

  30. [31]

    Byzshield: An efficient and robust system for distributed training.Proceedings of Machine Learning and Systems, 3: 812–828, 2021

    Konstantinos Konstantinidis and Aditya Ramamoorthy. Byzshield: An efficient and robust system for distributed training.Proceedings of Machine Learning and Systems, 3: 812–828, 2021

  31. [32]

    Learning multiple layers of features from tiny images, 2009

    Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009

  32. [33]

    Stuxnet: Dissecting a cyberwarfare weapon

    Ralph Langner. Stuxnet: Dissecting a cyberwarfare weapon. IEEE Security & Privacy, 9(3):49–51, 2011

  33. [34]

    Rsa: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets

    Liping Li, Wei Xu, Tianyi Chen, Georgios B Giannakis, and Qing Ling. Rsa: Byzantine-robust stochastic aggregation methods for distributed learning from heterogeneous datasets. InProceedings of the AAAI conference on artificial intelli- gence, pages 1544–1551, 2019

  34. [35]

    An ex- perimental study of byzantine-robust aggregation schemes in federated learning.IEEE Transactions on Big Data, 2023

    Shenghui Li, Edith C-H Ngai, and Thiemo V oigt. An ex- perimental study of byzantine-robust aggregation schemes in federated learning.IEEE Transactions on Big Data, 2023

  35. [36]

    Ensemble distillation for robust model fusion in federated learning.Advances in neural information processing systems, 33:2351–2363, 2020

    Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning.Advances in neural information processing systems, 33:2351–2363, 2020

  36. [37]

    Universal multi-party poisoning attacks

    Saeed Mahloujifar, Mohammad Mahmoody, and Ameer Mo- hammed. Universal multi-party poisoning attacks. InInterna- tional Conference on Machine Learning, pages 4274–4283. PMLR, 2019

  37. [38]

    Communication- Efficient Learning of Deep Networks from Decentralized Data

    Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication- Efficient Learning of Deep Networks from Decentralized Data. InProceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017

  38. [39]

    Communication- efficient learning of deep networks from decentralized data

    Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication- efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017

  39. [40]

    Towards poisoning of federated support vector machines with data poisoning attacks

    Israt Jahan Mouri, Muhammad Ridowan, and Muhammad Ab- dullah Adnan. Towards poisoning of federated support vector machines with data poisoning attacks. InProceedings of the 13th International Conference on Cloud Computing and Ser- vices Science - CLOSER,, pages 24–33. INSTICC, SciTePress, 2023

  40. [41]

    Data poisoning attacks and mitigation strate- gies on federated support vector machines.SN Computer Science, 5(2):241, 2024

    Israt Jahan Mouri, Muhammad Ridowan, and Muhammad Ab- dullah Adnan. Data poisoning attacks and mitigation strate- gies on federated support vector machines.SN Computer Science, 5(2):241, 2024

  41. [42]

    Towards poisoning of deep learning algorithms with back-gradient optimization

    Luis Mu ˜noz-Gonz´alez, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C Lupu, and Fabio Roli. Towards poisoning of deep learning algorithms with back-gradient optimization. InProceedings of the 10th ACM workshop on artificial intelligence and security, pages 27–38, 2017

  42. [43]

    {FLAME}: Taming backdoors in federated learning

    Thien Duc Nguyen, Phillip Rieger, Roberta De Viti, Huili Chen, Bj ¨orn B Brandenburg, Hossein Yalame, Helen M¨ollering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, et al. {FLAME}: Taming backdoors in federated learning. In31st USENIX Security Symposium (USENIX Security 22), pages 1415–1432, 2022

  43. [44]

    Detox: A redundancy-based framework for faster and more robust gradient aggregation.Advances in Neural Information Processing Systems, 32, 2019

    Shashank Rajput, Hongyi Wang, Zachary Charles, and Dim- itris Papailiopoulos. Detox: A redundancy-based framework for faster and more robust gradient aggregation.Advances in Neural Information Processing Systems, 32, 2019

  44. [45]

    Trustfed: A framework for fair and trustworthy cross-device federated learning in iiot.IEEE Transactions on Industrial Informatics, 17(12):8485–8494, 2021

    Muhammad Habib ur Rehman, Ahmed Mukhtar Dirir, Khaled Salah, Ernesto Damiani, and Davor Svetinovic. Trustfed: A framework for fair and trustworthy cross-device federated learning in iiot.IEEE Transactions on Industrial Informatics, 17(12):8485–8494, 2021

  45. [46]

    Characterization of android malwares and their families.ACM Comput

    Tejpal Sharma and Dhavleesh Rattan. Characterization of android malwares and their families.ACM Comput. Surv., 57 (5), 2025

  46. [47]

    Manipulating the byzantine: Optimizing model poisoning attacks and defenses for federated learning

    Virat Shejwalkar and Amir Houmansadr. Manipulating the byzantine: Optimizing model poisoning attacks and defenses for federated learning. InNDSS, 2021

  47. [48]

    Back to the drawing board: A critical evalu- ation of poisoning attacks on production federated learning

    Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage. Back to the drawing board: A critical evalu- ation of poisoning attacks on production federated learning. InIEEE Symposium on Security and Privacy, 2022

  48. [49]

    Vahunt: Warding off new repackaged android malware in app-virtualization’s cloth- ing

    Luman Shi, Jiang Ming, Jianming Fu, Guojun Peng, Dong- peng Xu, Kun Gao, and Xuanchen Pan. Vahunt: Warding off new repackaged android malware in app-virtualization’s cloth- ing. InProceedings of the 2020 ACM SIGSAC conference on computer and communications security, page 535–549, New York, NY , USA, 2020. Association for Computing Machinery

  49. [50]

    Data poisoning attacks on federated machine learning.IEEE Internet of Things Journal, 2021

    Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Lingjuan Lyu, and Ji Liu. Data poisoning attacks on federated machine learning.IEEE Internet of Things Journal, 2021

  50. [51]

    Can you really backdoor federated learning?

    Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H Brendan McMahan. Can you really backdoor federated learning?arXiv preprint arXiv:1911.07963, 2019

  51. [52]

    Data poisoning attacks against federated learning systems

    Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu. Data poisoning attacks against federated learning systems. InEuropean Symposium on Research in Computer Security, pages 480–501. Springer, 2020

  52. [53]

    Attack of the tails: Yes, you really can backdoor federated learning.Advances in Neural Information Processing Systems, 33:16070–16084, 2020

    Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. Attack of the tails: Yes, you really can backdoor federated learning.Advances in Neural Information Processing Systems, 33:16070–16084, 2020

  53. [54]

    Customer purchase behavior prediction from payment datasets

    Yu-Ting Wen, Pei-Wen Yeh, Tzu-Hao Tsai, Wen-Chih Peng, and Hong-Han Shuai. Customer purchase behavior prediction from payment datasets. InProceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 628–636, 2018

  54. [55]

    Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

    Han Xiao, Kashif Rasul, and Roland V ollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms.arXiv preprint arXiv:1708.07747, 2017

  55. [56]

    Fall of empires: Breaking byzantine-tolerant sgd by inner product 10 manipulation

    Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Fall of empires: Breaking byzantine-tolerant sgd by inner product 10 manipulation. InUncertainty in Artificial Intelligence, pages 261–270. PMLR, 2020

  56. [57]

    Model poisoning attacks to federated learning via multi-round con- sistency

    Yueqi Xie, Minghong Fang, and Neil Zhenqiang Gong. Model poisoning attacks to federated learning via multi-round con- sistency. InProceedings of the Computer Vision and Pattern Recognition Conference, pages 15454–15463, 2025

  57. [58]

    Byzantine-robust federated learning through collaborative malicious gradient filtering

    Jian Xu, Shao-Lun Huang, Linqi Song, and Tian Lan. Byzantine-robust federated learning through collaborative malicious gradient filtering. In2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), pages 1223–1235. IEEE, 2022

  58. [59]

    backwards

    Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. Byzantine-robust distributed learning: Towards opti- mal statistical rates. InInternational Conference on Machine Learning, pages 5650–5659. PMLR, 2018. 11 XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers Supplementary Material A. Design choices for XFE...