XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
Pith reviewed 2026-05-10 16:59 UTC · model grok-4.3
The pith
Independent attackers can poison federated classifiers without communicating or knowing server defenses.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Under the non-collusive attack model, in which all compromised clients pursue a shared adversarial objective but operate independently (no communication, no knowledge of other clients' updates, and no information about server defenses), it is possible to generate poisoned model updates that degrade Byzantine-robust federated classifiers; XFED demonstrates this by remaining effective regardless of the aggregation rule the server uses.
What carries the argument
XFED, an aggregation-agnostic procedure that each attacker runs locally to produce a poisoned update aimed at the shared malicious goal.
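A minimal sketch of what such a procedure's interface looks like, assuming a loss-maximization objective; the paper's actual XFED update rule is not reproduced in this review, so the function name, objective, and hyper-parameters below are illustrative only:

```python
# Minimal sketch, assuming a loss-maximization objective; XFED's actual
# procedure may differ. It illustrates only the information each attacker
# may use under the non-collusive model: the current global model and its
# own local data.
import copy

import torch
import torch.nn.functional as F

def non_collusive_poison(global_model, local_loader, lr=0.01, steps=5):
    """Run independently by each compromised client: no peer communication,
    no access to other clients' updates, no knowledge of the aggregator."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in local_loader:
            opt.zero_grad()
            # Ascend the training loss (one plausible shared adversarial
            # objective) by minimizing its negation.
            loss = -F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    # Report the model delta, as in FedAvg-style protocols.
    g = global_model.state_dict()
    return {k: v - g[k] for k, v in model.state_dict().items()}
```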
Load-bearing premise
Independent attackers without communication or knowledge of defenses can still craft poisoned updates strong enough to degrade the global model.
What would settle it
An evaluation in which XFED produces no measurable accuracy drop on one of the six benchmark datasets or against one of the eight tested defenses would show the attack does not hold under the claimed conditions.
Original abstract
Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This approach is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: Can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the non-collusive attack model, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the feasibility of this threat model, we propose XFED, the first aggregation-agnostic, non-collusive model poisoning attack. Our empirical evaluation across six benchmark datasets shows that XFED bypasses eight state-of-the-art defenses and outperforms six existing model poisoning attacks. These findings indicate that FL systems are substantially less secure than previously believed and underscore the urgent need for more robust and practical defense mechanisms.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript formalizes a non-collusive model poisoning threat model for federated learning, in which compromised clients share only a common adversarial objective and generate poisoned updates independently, without communication, access to other clients' updates, or knowledge of server defenses. It proposes XFED as the first aggregation-agnostic attack under this model and reports empirical results on six benchmark datasets showing that XFED bypasses eight state-of-the-art Byzantine-robust defenses while outperforming six prior model poisoning attacks.
Significance. If the results hold under the stated non-collusive constraints, the work would indicate that existing Byzantine-robust FL aggregations remain vulnerable to independent attackers, motivating stronger defenses that do not rely on assumptions of coordination or outlier detection alone. The multi-dataset, multi-defense evaluation provides a useful empirical baseline for future comparisons.
major comments (3)
- [§4] §4 (XFED Attack Design): The poison-generation procedure is described as operating solely from the current global model and local data, yet the manuscript provides no analysis or bound showing that independently generated poisons will form a sufficiently tight cluster to survive coordinate-wise median, trimmed mean, or Krum aggregation (the three rules are sketched after this list) when local data distributions differ across attackers. Without such justification or variance measurements, the central claim that the attack remains effective under realistic heterogeneity is not yet load-bearing.
- [§5.2] §5.2 (Experimental Evaluation): The reported success rates against the eight defenses lack per-run standard deviations, ablation on attacker fraction and data heterogeneity levels, and explicit comparison of poison-direction variance across independent attackers. These omissions make it impossible to assess whether the observed degradation is robust or sensitive to the implicit assumption of statistically identical attacker data.
- [§3.1] §3.1 (Threat Model): The non-collusive model explicitly disallows knowledge of server defenses, but the XFED procedure appears to require a scaling or direction choice that could implicitly depend on defense behavior; the manuscript should clarify whether any hyper-parameter tuning was performed with oracle access to the aggregator.
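For reference, a minimal sketch of the three aggregation rules named in the first comment, which independently generated poisons would have to survive; `updates` is a hypothetical (n_clients, n_params) array and `f` the assumed number of Byzantine clients:

```python
import numpy as np

def coordinate_median(updates):
    # Coordinate-wise median across clients.
    return np.median(updates, axis=0)

def trimmed_mean(updates, b):
    # Drop the b largest and b smallest values per coordinate, average the rest.
    s = np.sort(updates, axis=0)
    return s[b:len(updates) - b].mean(axis=0)

def krum(updates, f):
    # Select the update with the smallest summed squared distance to its
    # n - f - 2 nearest neighbours (Blanchard et al., 2017).
    n = len(updates)
    d = np.linalg.norm(updates[:, None] - updates[None, :], axis=2) ** 2
    scores = [np.sort(d[i])[1:n - f - 1].sum() for i in range(n)]
    return updates[int(np.argmin(scores))]
```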
minor comments (2)
- [Figures/Tables] Figure 3 and Table 4: Axis labels and legend entries are too small for readability; increase font size and add error bars where multiple runs are implied.
- [§2] §2 (Related Work): The comparison table omits the exact hyper-parameter settings used for the six baseline attacks, hindering direct reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive comments and the recommendation for major revision. The feedback highlights important aspects of the non-collusive threat model and evaluation that we will strengthen in the revised manuscript. We address each major comment point by point below.
Point-by-point responses
Referee: [§4] §4 (XFED Attack Design): The poison-generation procedure is described as operating solely from the current global model and local data, yet the manuscript provides no analysis or bound showing that independently generated poisons will form a sufficiently tight cluster to survive coordinate-wise median, trimmed mean, or Krum aggregation when local data distributions differ across attackers. Without such justification or variance measurements, the central claim that the attack remains effective under realistic heterogeneity is not yet load-bearing.
Authors: We agree that the manuscript would benefit from explicit analysis of poison clustering under heterogeneity. While the empirical results on six heterogeneous benchmark datasets support effectiveness, we did not include dedicated variance measurements or a bound. In the revision we will add to §4 empirical measurements of cosine similarity and directional variance among independently generated poisons across attackers with differing local distributions (using Dirichlet heterogeneity), together with a short argument that the shared global model and common objective induce sufficient alignment for the poisons to survive the listed aggregators. This addition will make the heterogeneity claim load-bearing. revision: yes
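A minimal sketch of the promised measurement, assuming poisons are flattened into a (n_attackers, n_params) array; the helper name and the circular-variance notion of directional variance are illustrative choices, not the authors':

```python
import numpy as np

def alignment_stats(poisons):
    """poisons: (n_attackers, n_params) array of flattened poisoned updates."""
    unit = poisons / np.linalg.norm(poisons, axis=1, keepdims=True)
    cos = unit @ unit.T
    # Pairwise cosine similarities, excluding self-pairs.
    pairwise = cos[np.triu_indices(len(poisons), k=1)]
    # Directional variance: 1 minus the norm of the mean unit vector
    # (0 = perfectly aligned directions, ~1 = isotropic directions).
    dir_var = 1.0 - np.linalg.norm(unit.mean(axis=0))
    return pairwise.mean(), pairwise.std(), dir_var
```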
Referee: [§5.2] §5.2 (Experimental Evaluation): The reported success rates against the eight defenses lack per-run standard deviations, ablation on attacker fraction and data heterogeneity levels, and explicit comparison of poison-direction variance across independent attackers. These omissions make it impossible to assess whether the observed degradation is robust or sensitive to the implicit assumption of statistically identical attacker data.
Authors: We acknowledge that the current evaluation omits these elements. The revised manuscript will add per-run standard deviations to all success-rate tables and figures in §5.2, new ablation tables varying attacker fraction (10–30%) and data heterogeneity (Dirichlet α values), and an explicit comparison (new figure or table) of poison-direction variance across independent attackers. These additions will demonstrate that the observed degradation is robust rather than sensitive to identical attacker data. revision: yes
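A minimal sketch of the Dirichlet split such an ablation typically uses, where smaller α yields more skewed per-client label distributions; the function name and shapes are illustrative, not taken from the paper:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Assign sample indices to clients with per-class Dirichlet proportions."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # One Dirichlet draw per class: smaller alpha -> more skew.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```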
Referee: [§3.1] §3.1 (Threat Model): The non-collusive model explicitly disallows knowledge of server defenses, but the XFED procedure appears to require a scaling or direction choice that could implicitly depend on defense behavior; the manuscript should clarify whether any hyper-parameter tuning was performed with oracle access to the aggregator.
Authors: We clarify that XFED is constructed to be aggregation-agnostic and uses no information about server defenses. The scaling factor and direction are determined exclusively from the adversarial objective, the current global model, and local data; no hyper-parameter was tuned with oracle access to any aggregator. We will revise §3.1 and §4 to state this explicitly, list the fixed hyper-parameter values, and confirm that all choices follow the non-collusive threat model without reference to defense behavior. revision: partial
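A minimal sketch of a defense-agnostic scaling rule consistent with this response: calibrate the poison's magnitude against a benign update computed from local data alone. The helper names are hypothetical, and XFED's actual rule may differ:

```python
import numpy as np

def scale_poison(poison_dir, benign_update):
    """Both arguments are flattened parameter vectors from one client."""
    unit = poison_dir / np.linalg.norm(poison_dir)
    # The magnitude is fixed by local information only (the norm a benign
    # local update would have); no aggregator behavior is consulted.
    return np.linalg.norm(benign_update) * unit
```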
Circularity Check
No circularity: purely empirical attack proposal
full rationale
The paper introduces the non-collusive attack model and XFED via textual definition and empirical evaluation on six datasets against eight defenses. No equations, parameter fittings, or derivations appear in the provided text. The central claims rest on experimental outcomes rather than reducing to self-definitions, fitted inputs renamed as predictions, or load-bearing self-citations. The work is measured against external benchmarks, and no result reduces to its own inputs by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Independent attackers sharing only a common objective can still produce effective model poisoning updates without communication or knowledge of server defenses.