pith. machine review for the scientific record.

arxiv: 2604.03862 · v1 · submitted 2026-04-04 · 💻 cs.CR · cs.DC · cs.LG

Recognition: 2 Lean theorem links

SecureAFL: Secure Asynchronous Federated Learning

Anjun Gao, Feng Wang, Minghong Fang, Yueyang Quan, Zhenglin Wan, Zhuqing Liu

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 16:52 UTC · model grok-4.3

classification 💻 cs.CR · cs.DC · cs.LG
keywords asynchronous federated learning · poisoning attacks · Byzantine-robust aggregation · anomaly detection · client contribution estimation · secure FL

The pith

SecureAFL secures asynchronous federated learning by detecting anomalous updates and estimating missing client contributions before robust aggregation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes SecureAFL to address poisoning attacks that exploit the decentralized nature of asynchronous federated learning. It detects and discards anomalous updates from malicious clients while estimating the model contributions that would have come from clients that have not yet responded. These received and estimated updates are then combined using Byzantine-robust aggregation methods such as the coordinate-wise median. This design allows the server to update the global model promptly without waiting for all clients, addressing the straggler problem while maintaining security. Experiments on real-world datasets confirm that the framework preserves model performance under various attack scenarios.

Core claim

SecureAFL improves the robustness of asynchronous FL by detecting and discarding anomalous updates while estimating the contributions of missing clients, and it utilizes Byzantine-robust aggregation techniques such as coordinate-wise median to integrate the received and estimated updates.

What carries the argument

Anomaly detection to discard bad updates, combined with estimation of missing clients' contributions, followed by Byzantine-robust aggregation such as the coordinate-wise median.
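The robust-aggregation step named here is a standard technique; a minimal NumPy sketch of coordinate-wise median aggregation (illustrative only, not the paper's implementation):

```python
import numpy as np

def coordinate_wise_median(updates):
    """Byzantine-robust aggregation: for each model coordinate, take the
    median across all client updates. A minority of arbitrarily bad
    (poisoned) values cannot pull the median far from the honest cluster."""
    stacked = np.stack(updates)           # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)     # shape: (n_params,)

# Honest clients agree on roughly 1.0 per coordinate; one poisoned update is extreme.
honest = [np.array([1.0, 1.1, 0.9]),
          np.array([0.9, 1.0, 1.1]),
          np.array([1.1, 0.9, 1.0])]
poisoned = np.array([100.0, -100.0, 100.0])
agg = coordinate_wise_median(honest + [poisoned])   # stays near 1.0 per coordinate
```

Despite the extreme poisoned vector, each aggregated coordinate lands between the honest values, which is the property the framework leans on after filtering and imputation.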

If this is right

  • The global model updates immediately upon receiving any valid client update without waiting for stragglers.
  • Poisoning attacks are mitigated by discarding detected anomalies while filling gaps from absent clients.
  • Byzantine-robust methods integrate both received and estimated updates without requiring strong server assumptions.
  • Model performance holds across real-world datasets even with partial client participation.
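The first bullet's immediate-update behavior can be sketched with a staleness-discounted update rule common in prior asynchronous-FL work; the rule and `alpha` below are illustrative assumptions, not SecureAFL's aggregation, which filters and imputes before a robust aggregate:

```python
import numpy as np

def async_update(global_model, client_model, staleness, alpha=0.6):
    """Update the global model immediately when one client's update arrives,
    discounted by staleness (global versions elapsed since the client pulled
    the model). Stale updates count for less; no waiting for stragglers."""
    weight = alpha / (1.0 + staleness)
    return (1.0 - weight) * global_model + weight * client_model

g = np.array([0.0])
c = np.array([1.0])
fresh = async_update(g, c, staleness=0)   # full alpha weight
stale = async_update(g, c, staleness=2)   # discounted weight
```

A fresh update moves the global model three times as far as one delayed by two rounds, which is how asynchronous servers stay responsive without letting laggards dominate.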

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The detection-plus-estimation approach could extend to other decentralized training settings with unreliable participation.
  • Adaptive attackers might require new detection heuristics beyond those evaluated in the current experiments.
  • Refining the estimation step with client behavior models could further reduce the impact of long delays.

Load-bearing premise

That anomalous updates can be reliably detected and that estimates for missing clients are sufficiently accurate to not degrade global model performance.

What would settle it

An experiment showing that an advanced poisoning attack evades the anomaly detector and causes model accuracy to fall below that of undefended asynchronous FL.

Figures

Figures reproduced from arXiv: 2604.03862 by Anjun Gao, Feng Wang, Minghong Fang, Yueyang Quan, Zhenglin Wan, Zhuqing Liu.

Figure 1: Impact of fraction of malicious clients on Fashion-MNIST dataset.
Figure 2: Impact of client delay on Fashion-MNIST dataset.
Figure 3: Impact of degree of Non-IID on Fashion-MNIST dataset.
Figure 4: Impact of total number of clients on Fashion-MNIST dataset.
Original abstract

Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model via a server without sharing their private training data. In traditional FL, the system follows a synchronous approach, where the server waits for model updates from numerous clients before aggregating them to update the global model. However, synchronous FL is hindered by the straggler problem. To address this, the asynchronous FL architecture allows the server to update the global model immediately upon receiving any client's local model update. Despite its advantages, the decentralized nature of asynchronous FL makes it vulnerable to poisoning attacks. Several defenses tailored for asynchronous FL have been proposed, but these mechanisms remain susceptible to advanced attacks or rely on unrealistic server assumptions. In this paper, we introduce SecureAFL, an innovative framework designed to secure asynchronous FL against poisoning attacks. SecureAFL improves the robustness of asynchronous FL by detecting and discarding anomalous updates while estimating the contributions of missing clients. Additionally, it utilizes Byzantine-robust aggregation techniques, such as coordinate-wise median, to integrate the received and estimated updates. Extensive experiments on various real-world datasets demonstrate the effectiveness of SecureAFL.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 0 minor

Summary. The paper introduces SecureAFL, a framework for asynchronous federated learning that detects and discards anomalous (poisoned) client updates, estimates contributions from missing clients, and applies Byzantine-robust aggregation such as coordinate-wise median to produce the global model update. The abstract states that this combination improves robustness against poisoning attacks compared to prior asynchronous defenses, with effectiveness shown via extensive experiments on real-world datasets.

Significance. If the detection and estimation components prove reliable, the work would address a practical vulnerability in asynchronous FL (straggler tolerance without sacrificing security), extending standard robust aggregation techniques to the async setting. The use of real-world datasets is a positive element for empirical grounding.

major comments (3)
  1. [Abstract] Abstract: the central robustness claim rests on detecting and discarding anomalous updates, yet no detection rule (distance metric, statistical test, or threshold) is described. Without this, it is impossible to evaluate whether an adaptive adversary can craft updates that evade detection while still biasing the coordinate-wise median.
  2. [Abstract] Abstract: the estimation procedure for missing-client contributions is unspecified (e.g., no mention of similarity-based imputation, temporal prediction, or any other method). This step is load-bearing because inaccurate estimates can degrade the median aggregate, especially under non-IID data distributions that are common in FL.
  3. [Abstract] Abstract: the attack models considered are not stated (e.g., whether the adversary controls a fixed fraction of clients, can adapt to the detection heuristic, or targets the estimation step). The claim of effectiveness against “poisoning attacks” therefore cannot be assessed without knowing the threat model against which the heuristics were tested.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. We address each major point below and will revise the abstract for greater specificity while preserving the technical content already present in the body of the paper.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the central robustness claim rests on detecting and discarding anomalous updates, yet no detection rule (distance metric, statistical test, or threshold) is described. Without this, it is impossible to evaluate whether an adaptive adversary can craft updates that evade detection while still biasing the coordinate-wise median.

    Authors: We agree that the abstract is too high-level on this point. The detection rule (a distance-based metric with a median-absolute-deviation threshold) is fully specified in Section 3.2, together with analysis showing that adaptive updates exceeding the threshold are discarded before aggregation. In the revised manuscript we will add one sentence to the abstract summarizing the rule so that the robustness claim can be evaluated directly from the abstract. revision: yes
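The rebuttal names a distance-based metric with a median-absolute-deviation threshold. A minimal sketch of such a rule follows; the function, the multiplier `k`, and the choice of Euclidean distance are assumptions for illustration, and the paper's Section 3.2 rule may differ:

```python
import numpy as np

def mad_filter(updates, k=3.0):
    """Discard updates whose distance to the coordinate-wise median update
    deviates from the median distance by more than k median-absolute-
    deviations. `k` plays the role of the tunable detection threshold."""
    center = np.median(np.stack(updates), axis=0)
    dists = np.array([np.linalg.norm(u - center) for u in updates])
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12   # guard against zero MAD
    keep = np.abs(dists - med) <= k * mad
    return [u for u, ok in zip(updates, keep) if ok]

honest = [np.array([1.0, 1.1, 0.9]),
          np.array([0.9, 1.0, 1.1]),
          np.array([1.1, 0.9, 1.0])]
poisoned = np.array([100.0, -100.0, 100.0])
filtered = mad_filter(honest + [poisoned])   # poisoned update is dropped
```

Because both the center and the spread are medians, a minority of extreme updates cannot inflate the threshold they are tested against, which is the usual argument for MAD over mean-and-standard-deviation rules.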

  2. Referee: [Abstract] Abstract: the estimation procedure for missing-client contributions is unspecified (e.g., no mention of similarity-based imputation, temporal prediction, or any other method). This step is load-bearing because inaccurate estimates can degrade the median aggregate, especially under non-IID data distributions that are common in FL.

    Authors: We accept the observation. The estimation procedure (similarity-based imputation from historical client updates) is described in Section 4.1 and evaluated under non-IID partitions in Section 5. We will revise the abstract to include a brief clause stating that missing contributions are estimated via similarity-based imputation, thereby clarifying the load-bearing step without altering the technical approach. revision: yes
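The rebuttal points to similarity-based imputation from historical client updates. A hedged sketch of what such a step could look like; the cosine weighting, the non-negativity clamp, and the stale-update fallback are all assumptions, not the paper's Section 4.1 procedure:

```python
import numpy as np

def estimate_missing(history, received):
    """Estimate a missing client's current update as a similarity-weighted
    average of the received updates. Weights are cosine similarities between
    the missing client's last known update (history[c]) and each responding
    client's last known update; `received` maps client id -> fresh update."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    estimates = {}
    for c, past in history.items():
        if c in received:
            continue
        weights = np.array([max(cos(past, history[r]), 0.0) for r in received])
        if weights.sum() == 0:
            estimates[c] = past          # no similar peer: reuse stale update
            continue
        weights /= weights.sum()
        estimates[c] = sum(w * received[r] for w, r in zip(weights, received))
    return estimates

history = {0: np.array([1.0, 0.0]),
           1: np.array([0.0, 1.0]),
           2: np.array([1.0, 0.0])}          # client 2 historically mirrors client 0
received = {0: np.array([2.0, 0.0]),
            1: np.array([0.0, 2.0])}         # client 2 has not responded
est = estimate_missing(history, received)    # client 2 imputed from client 0
```

This is exactly the step the referee flags as load-bearing: under strongly non-IID data, historical similarity can mislead the imputation and shift the subsequent median.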

  3. Referee: [Abstract] Abstract: the attack models considered are not stated (e.g., whether the adversary controls a fixed fraction of clients, can adapt to the detection heuristic, or targets the estimation step). The claim of effectiveness against “poisoning attacks” therefore cannot be assessed without knowing the threat model against which the heuristics were tested.

    Authors: The threat model (a Byzantine adversary controlling up to 20% of clients, with adaptive capability against the detection heuristic) is stated in Section 5.1. We will update the abstract to reference this threat model explicitly, allowing readers to assess the scope of the claimed robustness. revision: yes

Circularity Check

0 steps flagged

No significant circularity; SecureAFL applies standard robust aggregation without self-referential reduction.

Full rationale

The paper's core framework combines anomaly detection, missing-client estimation, and coordinate-wise median aggregation drawn from prior literature. No equations or steps in the abstract or description reduce any prediction or result to a fitted parameter or self-citation defined by the paper itself. The claims rest on experimental validation of these established techniques in the asynchronous setting rather than any tautological construction or load-bearing self-reference.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The framework rests on standard domain assumptions from federated learning security literature regarding the detectability of poisoning attacks and the utility of robust aggregation; no new entities are invented.

free parameters (1)
  • anomaly_detection_threshold
    Threshold used to identify and discard anomalous updates; likely tuned empirically on datasets.
axioms (1)
  • domain assumption: Byzantine-robust aggregation methods such as coordinate-wise median effectively mitigate poisoning attacks in federated learning.
    Invoked when integrating received and estimated updates; drawn from prior work without new proof.

pith-pipeline@v0.9.0 · 5504 in / 1255 out tokens · 28768 ms · 2026-05-13T16:52:37.188201+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

88 extracted references · 88 canonical work pages · 1 internal anchor
